[Binary artifact: tar archive of `var/home/core/zuul-output/` containing `logs/kubelet.log.gz` (a gzip-compressed kubelet log, owner `core:core`). The compressed binary payload is not representable as text and has been omitted; no log content is recoverable from this dump.]
A;*f r /:dk" E4G(zUd#W{O8uK "ҏqQ DܵcLo=kFZ f}b8Kl!dghFJhCm 1A p \pG ʍLytC(n6UK͈x)ѭ3cuVӒ]qQTEbk`,*t [(; b9E"Jq0{\. Vӎ]PVCF+~\3rc4$O9 Mf=nNH@؞>ר cnU&*-e2iiMusGX7Ε6Mфwg>&A,D! E$ Osj檻Ow_s=f7L=grh5\BO]f{\8N nht{et3~zZdä1띌s~5 idC#g]_|?x]-f^^Jf/imt~ĐMarVvg/\~Us՜]l}OKcc^rlL_xz0.v`{䶂.!$@BhDנ a-$,C{oSc>δkuOu+1;4Rg 3ZmzkP?2uG BN{`%y#9 qt0'*P{ + 1V#uRHߚ2< V: m4F E)VNyn齚8[?&GCyNɇO[DGge峵A#6WKT•6Z+}2Z#B1EHQ37KgL)LBSxFO'+GWafȿ avH1r!uuc|@w|6G_[|0+_+Lqz![I쏾VͶ{*]Iz8]99n'WYUv(>Qos8fvmp#U"l5%Y1bf;,g=lJ]L%үR/qvz4Ҟ 4I|=jیU-wL&* W̠9xlUi6ߗ4'"_J:!_y!|m1ʛ2kd[٣4jjpP=s&=fI֨1VˈiKiOfU61 {G8'Ǖkcȩ-Uak qDs$;Ng0Yɳ2d(,a!v*D+A  뛫XK7NkGRA2D>P%2܈y%r,Ӫ4~[ q,k*] M~i)NG/XqDfxS ߌE|釿OJP=.Z4$Ye@nGqE_[~7]* (5 Z$Yh7aUӇ'_E/7-ZzojC8v+Ot}̶ f7j`ڭ>s}3N`ݏ? +W4{Bx.xtK<tt?3n/8i.05bj89x㈿yd27SO3iC}KlxgN=s#>;#> 0]8K oΑ2-;Wȝ|U ?|НtɰTQўn XN\[U~~yJ͓ZOHՁ"UJ!~vofϾ +8zWB4%`6`$!fw;rGor%z?ܔ8Iy'.ܼ6^&'?yq|\%& Z /TItb ])*AGoXC#ɉkݥ-`[^Gh{@.f.sw6%=gkf2(Y\.sdž|cŃMI4 YlQPT%ݛվ'ة.alT7]r ;qD<0-qdzɀ6jQ!VynbiTeUBòuiew>tqp) 4<cj?M}q ݪ[?7?/"Gz4&_?AxwY9Ors+ByzV%LjTT(A.uyƱ*^5RJ߀.lįG-UNVտe,k#Zv=zD#]۞iiǴܔK_mn+HHN6^ h΄U'[![ Gﴫ׳EV]Y˴ sY+Q2%WD)H %$K>`Rd/RL8.ԞWzzY4~N?5.qV{*RF H\1I(a-荷v5u]Mб"yq˥8i0es>po޿9h}B$0 q2E\NHԱW)+`%<>+tC>* v*pUspU\xpv % 'WE\OUrT{:u[z iElUI&yiR<.O+η^i .7e9aS >[յqOXh G&|`<tsϨgG!>r|[́wnmB\_ZuL0lo=)4pٮэ鹹~1j."~-_a tFɗa(IcL.#t!_)<7>vXZ2.cR3? *(ۜ },UR*[CewosU9*Ǜ!7mB @xYw_`ZY{/mJf1/A2b~ٙż >0 dY]eɖ֥eʖ\BdeI1NÌK%tV{eNH5;%x*JY{h]r%'VG%*9c)iNO& \5ko<5+ɍp•p dઙTʶfGBVB&/?λ?-K987CK0;񡽢ٺB~mcABAHa<{ڊLY>p*^xԊ XdOxiϿqu{f!=ߐp~XI9iTAFؔkr5%8ݲ+}..# "yG)Ͽsp q}}s? w;<|K\rnsrmd] eFro7 8*vpH og[u^'5Hr"/bЩK&oœ֫%zw!_'.22uׯ4Λ?GOU_$/Zj"3}2b9O3Ρ_? Y}]VAhZπ?U-֨C6AE (fO62$mTr31}/!~Z V %es9(T2:B)F 4$Ee  _ ˽Mnn[vl \\=h* )&JLUjV|jU k֘Pyű !TƬ5+]kU2tM|icl)s)@ UαQ&1JlH)_R͜ߢ1e@{%ma&+fK.t+p0>?SZ%9/oocjf;S1>>/fCl]|NVfK:2xu)1 ZDrU;6V;K_zB؅7bM۴/5,T|J>FF^+;f0Nm 1(w*1Z۬\Ic~ߖOK5,<[p_M'9dvH i%]CT6CjIǭ%;:-HKNZ({J8gM̑|SUP|-*UTDC}Q^>J@V5fΤeft"?,o]/Gps}1/Ѽ#sU_%+sӪ'CL?x#X3|;t?.M/NR\($X01bZ d[;!3ZTt[ƅc)ZHuc XqLkFuŋMהIe++k(1y=šFdSNZgK̨*`P!&.I\)aٔ_Yưu u\碢XXcu($4k*oTBXɔ:Dn{|a -Me !Z8t*:--l`},R ~<شpI42 rPc[ņ͠=֭rx\Cg`߼c>@Մ)\=QB#FT u86SUV^lgy;m]'oM1WtŨ,HY"=*KYr!b4b"f6ƨdU$|t0$k ZEWpNr&Wq{@k+I*Q\ՃYO/o T"軡uu;rs=Y% Z6{5y;BO+%rqlHmz\S)$yҔMDDY5+ )ʹXJ9gPsN*wEF\SRؐGj#S]ȕVA86j V͜"nqnXme싅 qƒbEniyηӶ̾xɿ;a׿^LfMsA *"XWZJJl3@_jP;7HXd؋(=̌`(*N!UTD4W.Uq%zU^ǂVǾMQFoT}R,jk18@=-f6@!0C Ӫ+VC$(daDE-A&8 F˪H{?fv<5<ҀhꋈFDq͊Oׂ%E(#'_(uPW> J %T}60Lʉ-,J@1hV[3"v3g;"z^Na\Ǵm싋3.G\X] r0FKYA˂'hPGd%tZ%5XǂVǾxh;a凞kev[.ȭ' awQۃǷ~S5G?v.{yՂ[7,$h[[f{Z39ysc9gs-5ʐIjJ)<_2Ve9tJ"S;=uݿ_,.uٸ6e]xq#76Oe^bm g3/ַ9{~t^Ͷ zfq1Igs>9ș݇4OF~8y;rR֏<9wY;7;F^3{p&0q“/_;Xbf>}u_A'̎eQDݚt>9tUA:(DW3:o]໷|y򔫧ll*nG~;77$$j_dwsc ! JR0z;RǣR/c^C+*;栿e Tm-![Ɋ*h)*2v\eY[f:)U޳?.&i3z]9 d'W ޟ=T۹2QdۤcɒV5r)SzhzCڔb b0bípa.!j!hT \(蓋P]K<*_Ǎ.iW)cB:"sEJ?τUP?mrZ2-j::[' <9J] Ȥ0V,ᒽs)&m1v>s==#X =#;1ٻ綍$= iVvnjws$\xH.Aڑ]߯"(%*F"K8|M3SJ "*RRg6fZ墳`rcB},7^}2Fj.ۺ H]d?o_=F}tu4軛~Xch+O§DZo\76ayS?].O_ގFbف}Z/65^n9Mt"}I8p"m_/1s6-\Wӽn/[ײ[&Yg^e_< 1r1˙6hNT#O^dMdl:1- råw+&}q:=b~.;:Y3`0zf {XTՕѤ5YT3@O灹 wq>0z9V*X)WMl՟? ,Tm+m/'v<)[Ovpy% JjOESG/:[xBMT([tb䐁hfӸdUX#:SxPb3tQ.7Y %+ud^^T|u&C?7ʙ2ѧ WW0Is܍KM׆q5~HP;ujk7C^H}sl4̆ UΞ q!`2rFQI4wJQrKA,kw Q`VjtT8PBm}w[>urSUPe ]%|>7/*+w\TO:WۃG.{40@{Gڣ֢17y]+*5KJYiĽuJ9 FX68t`fuT|³8q6"NA? 9Mq$:0p"H-r <k% t2/t6~k-fܚnŌ1Փ^˨}|k OVrφuNХx4S\m?F:xfrf , BgȢ35I&h"X1y+RB_*^bpqdup2ˠ}|ZK__uq}Kwaay E[G(nya=G8ƺuz6ˋϛp=LG %T[dd叨+s$GwvDުDZJ~~޳cywՏ+#sVR& 1A}08vTQ!FMsFV;|zݒq :SS?NMfro D. |Xq[.\U݇:08bN3$eny^!4;0DŽHv ;n vR٬lt=VEuH!aJ)Rb 6S$QG1š=uH!yb4N/LnbYk>]9iᵬnhr68%an)zAhDljRA3 |SG۰C+,vc6HwD@4:&NF03,*͸Jb,aF! 
LMl< ;KŽ P?]O?YQ$B\@maf΢D4AJ~.2` 7u$NvXK>@] e;``b1,6hfRL!{`X|D:n&1] +Cla M`h%jpL0uuUJd3(3w6\^dʓa"]eVA̋0yaۧS3NnQ çEU= xԇ_u5o Z!XT }!܀dz\EIÄ"YU;g.}>_>=xTxTƽL.=bⶲ_6[dH,_'" S81E_.4+3Kd$-6k܇t"eÿ`` {1z$|o!0vŖ "EfM8edt @$>8~2R2dž^ZdgXn$9K)(ԝvpsVLa> *a`;`kS!1k˥CRrB\wkU*XT ,./#ØR˺IǦHRWVU۩At}j_WY|JQiBy)f6J^Wފ-YE6cPYPPfW?F ܙHKXALi ڶZߺNhZ߫A^ y- cijk|^Vg/\<`v;l\^LgaQtgU;eUD+6ܨE2 1ol!x2vy z\(:jrlhs~9-4<0smCX ?pNf9#LS+C.#  pqO?<}qO:<-[:!tI<8YlX;GC"řo UtP wCuDs'«cD? ,ȳIĕ$OjuIAJiDL/'M2;Z^7xyKcIP|Hu24H0DcՉ''!NhCtc: S"w&I* b@a:b飲yaVE .u)hIǵe;Q!(8v0)&8Opnrk&ϓdGRdju:ґeeh͹`ܧg B3ҥh/mrJ3E|3+! XGI$r/".UUB)Yiۺ$I ̈́C <;*2 YbZ3"|lH%1QMKc5o!cJyƞ!186DwmI .cuW 06ݜ;`7ѯbIJ_̐dIQ-ǀ%jYSS=UIF\/z)ΙF"X>@M\־;pZö/ o/↔>Cuy]`d\0P"iV*&h5O&aG<ڄp([OTt)# i2 DY` 8 )RƭCΕ6II.(υ9wEWLC,38*jKXPq6{p3!H)bRLYY$Fo?}t>=^PjA!U>46ݜ:pk^)[`DNh $h,h4p +43$@ˆyCEr9V\3Y("FxI.RJ/EJJ݂<9s־\ eOx|"8.FxU:4vTi->UNMC>ZsyD5MVY&SBrp(L¡I1H)ŝ.X0F3SYtf"2FSHԆ;MmR$D;MMq\9C9 DY &r bPnCJ`P[3.ϟϖ0*(xh`wƣȽ 6yn%BlDNʜdYY1r ]ݨD HG4ςO!ZGHУLkN%RѾtRl7x2mj;"-ڝmk@Ŷ)6N&f*Fmb^ϧz^Y"z|☜7Qndmݹ-d녭9\ E+jPƱTd8t4CR6Ʉhk*P(2JR9cJCbzɡ*$SV!碒YJ A42Sgb\b.z¥FW=,m}ov4\N?IcPQcFb!P BE:s$X~[=PU8$xT;CFfH9zўEXPQ HclX$HGeK.vb Cy J[ ~&4'5@p-(\ƦX 0<0>oD+S^YR24tt$ZopHMdY`'&}D @rig04NRX@sT9*x AZ_zb w#!vX dq5؁-k+ le/E3\:81Uht>snbYG Sx&0^C!o{^{+)c`/,0#d"VAJpct$ 9 OtQ'/ZGOJ,:6!gIt"aG Br'#9cޤ=6+(A2ڦԝHQimWp⳻R 5T>w'IRƥ Y xTZs>DbK:hH{;?l]`|蔷 ~K{)!"ARb}`uQ*m=ic p%pi<\eK8U$#zgaFo8/G㘝[xkZt6"{}zKpB]~SC{mpfk[ὲN[=ƫuSwSE\&%廍דpK1 tt ?}b5I.((0nT;xEh'R^wq~>~"|r~ބ16o>Aږ;ƚ c:A9=#ARf }a6awJd~-\gO,S9gr?g0Muv\}<'?/0?>->rn7"z#+ kn > ȌN|Sycqwr2=9z:Nj?ϯuߜ\Օn!tS(.=ήO_4;{{s0ZVyG@պVq5^]toK'xbyu\Ko[$ިs |G͸|2v:]T?8wJ*rڔޔ|e7tldgXns5?dKօ+N`nYԥ~.{:h°Metf0j~1 u!c*sVIJ`]-tWf!vcU ]B”KRK8RT\7`p‚$FB;̰_-Uqy᪅b6 ĝ}s۴Ўcpi^ "FdT9mBB^>Գ88?rťsߚ,/T10L^k[sP;U<T?U%ޔ[NƳ|=3SU7y"F0&6.L>檴sxcpc~~2m[ԕnW//&^¿ُ nsST'hl_i?bia]Pii=QF3Ij֤;k&,}W4dqK_|W5})n9D)RO  411AhOZ#U!kwgϪ/"f=S%GObu=2 b5d~wq\ͱ]W}YU7~G>ǓqOhf}][n$Q@+Zy!>_o4z8yx $߂_b4'2Ԍ1WskdmlHuo)x|xiKemvEhHd;0'bZFaƴnƣY^IQG5#z|x!8JYooE\ .i‘]>&-LL'`B-˴?Uh7qG@E% n97Bnzpɋnԕ69]oɧ~[a_xs?WT_|WhX5J g~٬E\e, 8H)[? 3+X*~6pU]I`ypUኤT\tpJe-\ 9?vUs.pU}pUԪW貭A-jݸq[.+wW.e:y)s!dHox &mp he2t}/q4%?#l&-س"<˷HkN~Hiyo%JÖL:8 $0 [HJ0=JB-/u(ַh!J3wLVz8ڒĽ̪MyϾd_>1ۓKQ=:RNk3+2n~#Z_q/eί @mz#g ޣ{n7'ed$N9]4ڋ$D;ujqFH/>jՖJRmo6+ntQ~JtԴ(} `,Ϥ{׿Ku/qOr}+?jeݟkZ6hﶟ0̦~ ^S=`B!=#V~mZTkf׏UpEJ!;F9R \qUR \,)H`dsZ\Hһχ"\AB ` ,":*vpUyJ97Fvlz\&814)Չ'hWAf~ K%1 ֖<1::C < Ң#479HeI_>cb?(mʟ~/.U=5G"jTY~75FeM2720xmAIQ֠Q~lb> xyv?;>ӢK}SXgbN!KDV )52ED\xF霹YĒk%E#r첋.X~$Ous;/SЌ!=Ea#,כ{O%fcmKoW+^-Bx0KjnpVqC4lARc,bvqkx\`H:o}p#ǒ~vmT`M,V!l57M*Kk%ȣrUM- rAX"JyYre&[`{NЂ!ц\;At5qv'&n!)և.'IS{):vy|_Yƒ#ny kZN56PR|L>:y$ 藫3oeL7B.CuUrF V7(oаL?KQlkU׮V5GjUif"F&5_g]D(QHY«)*sإo&7;jq~5zW/VÏ_blyE>G>D}KlXٸq8"[s}.\vymyFO'k8|y7)lS4RuV `wPp$X@X"mF-0f *KVsD RyHAJ_ՠ ^DA$DʂHX"B'!9٭C2 j6h[f]hh'gJq\n;m]66{]ظ/ӫ=+u"_2׫k6JNG3&zfq-db(%r<)e,D`:k=U'37@-M<{Y?<FӼoylWwηoMm+#mI+ X$?3 2瞆ƨ?YnXjL$(7"adI4.CɣVo '\4uJ{մP\ʸ;\pqg @ҢByʒ:V1"rP,HDI8._Oiǡx(*!/awFۚ|LuF CSd>{OwK ss3U7n,.um,uhz t=ξrhQE"'\76g꡻ݽyzɎFnd ~l;s^7n|dWܮ%ms{CZ}Gr 7=hx}+..\tq(黾YϫF.^"b[g6GuZG-]%r'>˅\ 0U! `Y#j,i {ѥ$}!#}C$t{rbidwW)Yܥ`VJz+DJ\&"i1W7f6T=OGz ՙ{Q: N[,E-ríZ6|J Moo?j,NF4UAGSkT'0˜ #UkJZ|he4n y3;\k Ay EI1lrK1$yOb:}B'1;m6p❙XBW܆+)eF`2霘gZČ%G{uÑ6:"<Ұ.jz0KC,ǔ^g(̍ S^;%[-q |Ѡ 1Q+ً$@=m=RzSζ?b˾mt>ۈ~2&k|{M -չ @[HQ d2J:WA(e (cA j:$cI4ɤ!V裱h.R.4!1GR1IAuO? 
/A㹄x6t󕂽ryziW]k1{9{Yk, }ZlTFS Nu]W0(0bs ZRj=&pFH\qO =1#.1 +4X)0TIePr"+rk#Gz&-be}V4ɻ;$E\涷Vp\tjzD)so˸;K<ۈq+\FWLTI"q Xs#5X8@hO<dx{壖zzꘇ҉Rb$u@/S#LET<o!lQʹViE<0OR׍%S<1st~#45`Tܖ b.@Fh;P8Z4 3-=Gҁf}+ӒfuSM(Iz ^JQӖ> pYfw KMxE/L_;#b cGA(aA*c0GDM Qy) iJi|?r;ȜHx- h<HlJs x ʪ~8E8,L>tz 2*c"7c?*TU0.G|wq:[]]P|>N碿^j.ۺ H]eH j2^?mS%)Dm6kWy]ʟ.k&/?L.ف]Zr|(u*S[IM&e:٤9p"m*{ qoE͏ej[G!}ϟҟMdno>{:ͳ3*wasˉTKe9x1 ongIe&EZ|7Pkzogtpn1d?/:X3a nn[`l߹x]~3 :݆yN MЈ+I|/`\*L)p՟?LY(Vھeh=7Ye{u}ijAU UBk]}% EfO%qP]~\ש{> &`N=SĮoʳJa#ᅯ!Ie|}UmS wz1]_~?KcrU헓O͟wWul}q*3}}DS4{sЕCem We2U.ζuMx-Sjܯ}v (rS'|'ѧ$n_Rƽ($v1;3Zk kﻇܪߩ+W^>U(xx/_j|d {y ?.`:~#C/3cL >6>yFusUNj3Ri4IA1eTyOGfrɀ`s@gȢ35I&h"X1Sw'uvbSwT8)AXOeXz^"j%% jFc<04:50w|s!sqy0KT Bz"f6& \A%\ 0VHM5Y^ZW۾ZN3r7:uϺX*`;q]OdX/d^_.ބ8~`? XT ,U59s(DW(:UOe*mN작̓.t4Q;mՍ{?Y7~h֊mBVR& 1uKD&p-TkQmQ'(߄ogs%FM52$Bۛ0'2a(!Jt<4;\R:JAgXE/Z\ooRo-s"!)sc:=rLo7?qK^5ԘdwV>MF SZEN)WHc&hP4A|DsˣEjbiy'\JlB OǨU1kQ Z6ny.`cpvtqشEk_B2t9,)PX~{ơڙ.u:J"Ĺ F `5p%J)R)d_z?ҴD4Pk4b'LZX iP4)Bp {`X|D:䨐1 vXC"pa M`=)RJjpL"|Xџ< H]P(ʅ?_$*{[d61^e0|x=N>;/,*YwξLUV.4?7.<`yd3j' Z!XŤ}!|)2i!1.) @>/۹pvkB"\zb[R$M'Hċ4E@w şȐ{Xv8UR7IqHޗȊL>,0 j{!]JxЋU:4qfE1z|ė_/kC@v-NUZ([i`ɕ6nzj Z2R0J$mDS+0."85I.Z J1uf>7vbe"t^} wDkSFk@YK r rUTd E!X\oqB:IfJTu hd}*_Z4JQr,wK_S?u9}ڏr^=Xޘ),nA߃$j3y8pyHro~>h$yYMr0!UY ܜ{isFg͙(uc$r4BDJ8`0 t:GN'h\fRɋkM'Qp^Xk*b5Md6'XEFMMd>7}nUd H4Lx[BxBUۣrc?hqN59qF S;CW .]J{:Ci?]%;UuWJ0zt]tPΑ$uĬ3tNhy*3+őfCt8F/`]5EW EWwPJҕT.iWX)Jp) ]%>1WDWbǡjfvzTQ-zso dovB]je8 o^eɢT9&9FX\h&a2Q,~}}s?9MX .F'-aP84-z>xn:.Da1܌|^E8& xPeK:7~޿nAR/ån8eSi~5t 視ík*QJzFܯg~^U,tg*}Q׳*դt("=]!]%  +)qg+@%o;]Jz銁q!tGJpyg mR˞ΐ`1K{? 4EW ]ZT*$9ҕ.Hv\Jhj;]%\tut2"!JEwWUB[]J{J)]`Mhg*e+tJvJ( Ɲu* sJpIg՞P WCWrǡ!dX]p%"%{ܷxqⵗq~hǓbbˢ5<^݌:T18p`9/fQ㾾}p_t-(I? $}O\ES+06gF\s6;˜|Į8sOFayEf\qXtY{=1Jz;kHE>גtԂ4a?kA>`rF78Ŝ9s˭A:F ~Ѡ> RNܓ@-9%JٲrKNܡCKt%e+NZt P{:C%P!/MUKIW*孧Rΐb/Q9J ]\TW*%tPEvt[oIr !<22#o kσa XXmRԒxƋ>Ѥ8IMvQ5zfu/Nd]ppel2Iz\\Ji*W#W(+Qqqi{\q"GrWlyw%r-+Qy:@\n$+wWPԺvu m$FS \BQkfPTZ7ЇYj`~$|4cjJ0Waճ^ G*pJԉF9z߄v& ܠ%xɦ3Q׻NAI:\9I0+7}9Բ^o6}6đ[5a\ZKǕtks29 %\rZ痎+QR< ^`{p%rij_ `V\ nWw\^\\ ~l?_N&X}ͫr4}Oڿ N_J_{R'fͫ7_ ߻Wȇ0FƇwKwlփ/?w}s}g>t{eVn>kȫׯWwkpկiBk pp0/m՟oh-3 vyo'L 9p)ٍg.uPϧ=.o>":E;_?oNǃivo y"sQfYCV%ݪd0*}Hھ7Մv[;JZ_LOSt4K4Nt\1wn$8@ 6WE1Q*rD9ӯhRV4XhZp%4э+5Qiĕq^ّo޿ WPkqW+i["0ޏ+QqAW+&5DpT rWVǥJTZ "@r>* rQpﮠ~X3R1(8\7zƯ:@\l`iͅ+Yjq TzV\ bTv3h\\3nJT.q ++vzNkƬMJB 'Qoi~S"=X[TZ`ȮWq/yk)q{0|`=o>Q ~k2Q¶jwp{j3; %/ W"vѮ:@\ywL0FZ+Q5W/+VWfbeap%r|4|\JW\ rLf \`gjkWruW+GV•v4 DnZ&t\Jq0R1艕3Jz7 -Wx*W4ҙAaW"5/E%WHq7MW". WD]/dvz:QZIޗڧ{jڸ'\MRԲ4WC/HKDpDrk)5yo% 8?cƠvl1%L~w5X7;Ӧ"u6A<.<E89MpC&5zJN:JNT.lɵ{JNhp4a\Gxڭ:@\4Vap%r0>,W2W+6Wϧ Q + :+wWW\",+z \AV"Ws AT.lW/+ƫp%}W"w{ OSݕ\,+^W}4#]y*apTÕ5Ǖ!* "5 Df]JTJ8D3p4qj\v5M4 \Wzm#J[6a\A5 jZ<2:H\˽ᏎI{r[CQweܓ&{Ƈ21;嘣c=߳si&0ۑާ7͛ ?n~+̵&0'A9JfWA*ϏzmǀlO [/_bS>o?7pԅ9]+ޙ|Ŧi͊Stv+o]}*g7?mv7ͮP!~엿88k>/ g(/A>nwQ{OXG9J? ?(& ^ [X.ykyFl yQ7>o̟=wo!<1>c?ghCFk]ӇtݎkeycȖm֝ؒCIrm\05Y:ˑQQ6݅DP2?o]_O7w4aWk$Vߦ~h]m?~鞨(=tW9&kt8.[J&b6z}9eЌ1{BR*ƨTrsե ݫRyb(llڠԗ;yAqVPc#\Hzt*dՂ6'Fj9R:So-A(V1J&ׂH5NWJ=hRV`b4<5ŇwTbwkK8IydMs85z-pOKOIb2c֎7&{ptL\b/9%|N^ OD4!9R[L#MZ)ʕ< ۽r#2 IWDISC(]Bȥa1I{'mv( Vu rB'z]#6τEv<&C*5hi/<`3<1'Bk ԜssYUȫ;;]SI{k S $mM ;H9iE%xu})9yX"َMѵHIZ%9įZk|0 MmDiivN"7&X!֪3lm6m'r05[b>#3k^,XU!kՕkʧMpS7B 37%bfC)V 0kϒB4BF( ٥fiLZ%|W8_`&6XvA9Ų- OY`boxi0oL8AĹU X.*]/ bP XBs+]b"n,4DžĕfPc]=^џ. 
*EUr޼1 k9+y PPB ׎]SpP|hI']6\@1?؎~5m54FXYbX!]AАV8p$SS,`@8׀T%&ijT2\S j,+:NH&[Œ 0MN+K;.SefX57]\?, e º509 dAPDdEؙ\mѪPFtQ= U,,8f`&boS/%JC%;:P% b->AΛ:2 pqR CP{s e*3[( Ls$e中bQs|x֞e(Ez@ߑP֑T8匂kc7ՕHޫ]D5=))"FZ.zF3y|׍a!ѿ99oVh8(Sl-@5E@I ,uWH:мGwU+czh2&m7236>J/7bF\uHNO*Dͫ"Ӊ1Mc¬v݄w&^8.oձ,kgr.~]A7&5l.@6=DP 3 o f9.\S\M*)ҕjhJUaİ'8$;෋*%tF\8(Z V<@E&rZed^]0P>x!kn #.nc4/dx $>yt_ 5P2&ݚ"[`q;2q /=&UN5~d}tU;k lZ .k$ Sc~z߯.݉]@,M%C.z*2Θ} _{`#FK|wI>hAJ.9t '0Blk`)`1E#bZӬYf}JhF;Xut鵈 +-fmbYL[vVhň޲3AKgyB΀@ "25;5v㷋EY!3S*JJ+12~؃Ra*@PqY]Qõf5BeV]!۔%V­k$+,Iʣjd h@eVo Pʵr'+ުj^"[ZFFmUHfP}o3d ĐZImmzqn{q~Ώ7i.ߵ{:LHW[.n[m#'eJacx0ND)JHu6tk*ZεIP'BãkBm3Fv) =}4liV{ұR5n(a/[tؚb樇 |yB1C[uŹEͬYf1MA41K p?{׶۸eEhuL'L:-K$%M2ݖ|i[v7EȪZvU-n~G= 5IX!z5_++U.9. ˃ &}*H#4Քqe ~Frr5W)P.5ʾwTkϺ[7m%u΀/e9&c^QqHNVk3< 7dpPU_<4P0uLꡋ 4MBӢk\~Pw{ ~1I4*afШ$b .gTL5idM#kYȚF45idM#kYȚF45idM#kYȚF45idM#kYȚF45idM#kYȚF45idM#kYȚv4J݀r Yps J!ɚv ִ@^{ DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@:^'^!9qqP@T!NbrsܑJN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"':8ޜ Z6'@@98"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rz@Zoůw笠^_Nj(ͮg@$ !1.A`KV7.Jȸt ƥ^H7j@t_]!\aBWVQʸvtl@tB:<=i#+sb@t_\HW׈u *DWGHWZ;`tָB:]!Np+C +kNWR2:F1`29B:]!JIi]!`7ufl0 ъ]%:B1gDWZg#Cq{0 њ+Di:t%Y)%:\gHm~ýܠ.vNj.]5b.zѿ33zӷgթ*gpfEg^jEjvl*gyFIoj~㛑Uɲ݃*njYעM6*1D=T::kGnb@ 7D4&&RڲE4cP+ ]!\5B?(mI'zmEW ` BWVC+Dҕr־ $` zχBWynġO&z^(>$BVC+@:tBV㏑7RVr8SW QzGtute!T|@tT;N0hNWRS0xt1;f#`CWw8gOW[ ]GV:aLW[NWO̕}cz\ޖֽʠz])^)k`L3O01wB'ܷګ6g\\Mov_|M@򃲿?~we9۸SAڙ)t ?4* DVlԺR>-aHHn۱b|{]+n*Ags0^06Ƹm%Ǐ'B.;-u3}% r}97iyn)nk\%Y4a4S捚L $ t~]/lMPn7]&R:M1lT`!UMbQ9\eojK)`nR.`,Zf\jqsf ?~V*w %:-p+{1|5Y_gj 7yb u 4*A|HzmO|4WG~lCMIi>nfg]t0#$TM.3ew02㵆* D*Imp/cn3\HC>̪Ѷ&F)KX?4F*G\[^.|S[ƛel)|i{8<,yzNgXI =G "_*9ii<]MAjFaڥ7<DU-=z{}HPڻܷuӣgEFU.ЧkZU-wVּSExULYM\-pg" Pц7'%?E]q붾g>Ypax(=&h:µ>~Dxu=?nGNSy@׭[+?“{+Z!\95=Dš!JiM8wJ#<]8t?|a'Bi%jqLV )v0Ų 6jIcs}3{^]oĶo O'_}ۤJЅse[ˬMIkU:dt<O> bCZ0-P K ^ |8o`7,w…1}sY.u U ip'}uۦ(4C^wEoK%EOc8g 7h7)IƖ\xu|Iqa3Ng'qu;0K/Vv)3+ Q2UL-,*+ WnsuѨlW֠8KxyC6pQi}TYͥ8IySF49{֋yX/\&C_7v&_8VA w(M^~ w6E.¸OC.FBq1RSڨ(sAge>c$'SY^ 31I턶R,KPy5 .BS `ނ z^gYWzp_!A#)+EUhHɴu3嬊Ǡ_>MJ͛tjTqZP?h5VFm`M1BRc*D\5;_QK$b3&BؠV*KdWR*8D |3&yh>/`C#|q m7i۞Э.Rn|uz{nG2=H)dh]^\Zyg,x@w![#U!2xkt1xr97(0+^X]B'#tX{}/8:/Џٲ 7ۿh3=厘,+(|X{/~Y yЄIfX/ncGejμ5 MWR@yPJu8^X(9W~$M)UњފA!AfyȦW9jK ]c.:KQqs 1zTᢲ\C((Lr78 (Ҧ뎽_mi.Oo6hUaE0)JZq„T0d,זIQΥ b Mz@$[sW.A$I 9 Hh-s;jm޾+di7q< svⷽUɭmo=xжsǿqپ`ǛК._.g]TU'h;@J io/lHWV~xFxw"8-W[$4"W<\m(Bs.Z89`y+3gk5F/2*)%m> U?{Ƒ}ڽH~0pd8 {/E2JK4ݔ({ SSU]z 6`̙rklqVɦ qơPeօׅGՅ ҽe|ڝ;z\{jyA5562¹eS`@.0&\`ͼ9T`3 j-$M*,efHґ04G"rRΩEnm#1OEkgCάu{ v+Ǔ"0|1PpatZʂJF(>\H,!ehG]PAQN: >HEعG)@:HL89a StCR'sǁ׈Fx xMRAx&$c m"`A Ե@rL@Q0{,FH)QzHLzz @(4(IFbRm195uGmA/Ve+\%EY/^/zqku$!12Ȁb^sđWDas<J`&u(zzTa68T.7U=Vp[;5AaM >U9k~#oNZ K`\PI>o8ޘvu!bMO FD𚦲h L9ه9srOks&۵fwLOf_jnB]dm:z3 l60e3zjǤ1Ap`f=!,aܿۖwO޶c救!u7v{JCij|m2WVzKko>tB31g\rg.d[_k)y Tcpř2߇>HO#4*RԂR2%$Z;?H_8IO.P.":V+F+fypa6IbR 9w9e5!`R dT1ySt4DQl=\U;[B9l=3{9A sn"g7Ho)%/4 YG_o7V}ŠMN#}9T~䆽Lufٸenث3L+jbV41r z5MLobdob61G/{W;*ݔm WC0+g0fKR2|%gS4a{v6epJ-uq7Ee◆G_ew|jYap9KlJa x^ @1[,Sl'”]kQMq ӧu(; 8٧xb܇:( \9Sgvt3?:b6eƯ`YE H*S.a䂡V@c:@ކmAzl=|!HL*˫0+I`RGTRJmɌ6>imw1Q_zfU꭬iemgzbt߆+3V7z!KSB,r~7 3 zTH25?*-% 7>ZI8([v%m Er@& .x9A NS+E*L< 0E4^JJd7DƍŜ(aH樈l쎊8MaN)opn;/oߏjb&.|4%*r?_DȰU;`J/-S.x/,G{bL" K6^xok} ;sA)WlA& ܬYkf}7 1ߓ ܾɲk6s8 Lq\\ɤ7ΔH+͂E)㙯4K]\ \~W>VvkZQɬʤpXP14&1 %!i\K0RSrͶZoJd*X~.ÁnV|$͠ ;tઅqf{8ԷWWb|xYk, >G-6^*F+m4G$ #%ZϴF8FHܲqz9R`I "Lj.e\)k#-#md=% OHoλXGz_%2Jm0bw-nR|{1B;n˨5ʕ*8T$.x{nzO3ή}o|t^|q1")gI)#LET<!lQʹViEyhd7L|$WA)^6>uKEOe B`mD$LKOt`|RMꗎ^;uݴ}C2vIFQnd7 ܟLR%E[sKWX#Ø2FJXJR:D8N Qy) 4O{j|5~Pk\i=f)v.)QfuOM NBԍPz\R)\=En7o?< 
1J1+hNTPpYËib+}7MQ#۷EAt/>crqn1?6[3,`1.nn{||1\jO/Mfx.[ԀQ2D$e`.bgB.>¥_GLaERrxL!L%e/*a0 ))smFLBw̭0Z̽I={ M[~SYhJUׅ_\5RU֗?g$Ő5w bХvDiE6 i]_?U㮋 kV"vY|(t yۄ%.e+pg`__o)6rít%TmIf=a(v@C:h$nջu]?ڲG.=vpxT4bڛNMhΊAb>MN9L̨-n`Yظk؈]rwV+ 0]P֚>Ol@@Ojv²0lI[,g$Y>*q78h:s#&} 6{ځG&1oб@{uأ+b2vƴiT G`h1y'3hKV{2?`2xQip)ŅPBEfR#9{VEV~,m~qM2(|eϾ;0Fr+GGzNS2R%\.vGWWzY^@*&<o|5%+GK O^Wr[-?'/ϭ[b9Q(%-ctog ȢX25$eD}ճS';RAyM*j=K/ jFc<04:>ñp]ѹTr('@49pMpb0ĉ8jlAV _=~ >n-=v}JgǮMEqM e@`QJ=L!EA*J&)l,Tѱ\(CˋuQ*WYկ˵9ΊsDhSk0:Dk bWbeoE9ke-e`[¼&BP5;S* FMsFV;|>ެI^9;8Lj4Z5Mma^5}Eف`ԫi͞fOJzفJXߚ+lN f̷&|mp[pJdfN]2$eie^'߻!t[@︵霛DKk,3T'|BR(*L`i$7J0G>Ӽ.ƀ&LdPc(,k ָRc YSlXh:C%J-V;=Q׿랓>!=]Sσ;zS;tSs2NI-ES{]/HmtQ)sRQ` u`DyHvV _u`qd۞gkY^G*#R"%1jTfi%1  Ays Sf(`ܫ0 wu^^Xa$B\@iaf΢DAJ4'Ya↺_SI۟kkG, P[kG,f%,B) {/ O/HyT!c4A8 1Th=րIV9`fZ9Upw݇?x:L1.ܭO/u٘JMQa,(^ޏKԤi9f />NfU/.L? w=M30$?ABXT}o 3Mx2Is 1jT&)Z|so?pM&$D-nml|7nk@͋$|X=OgdH, qI.Q:Hޗ0+2I40 ?x ՚I| =[Zcg\B0FS͟{|UueWmp-j6~`M_..e=Er-^$o^^Fشuu'L"4#; ^r;#ӔlήSwf*Y5m, u~7_  ]:|YƊ\:$UL.G.e$Sjr0~T_7LM=1E9[eً\<V]Ksr+JcƣO]%lHU]R)<%є([SD|(6epi4_-qA*t ұIvj~lNOu?*NqK\ c-,_f ?Ej<~gGɧXorۨdGFcnVJ^3М+GsX*6o(ycPFf ө,0TLʔ[v4ל-0=ߎy gD-x5Z2&d0$H>/2dk6 SGDxB3رx0m'̯Sbծ1裗y̷fDK|C)i`x)/t]8h}q_Z^g2wn@z!tJBWxȩx;)9b/zSlM}"PNKUPlf+=bX*f*!CdMqR!I^uCr$(m̢BS <SD:Hjfy}!SK݂>o(Z5ӿ a9$Y.{k,S}-.e.vB5t2uKȳh>l:J{/- +]n.f{-.J.n++S}.79J cm}?L zO7;{m1@Qȿ2#R8m|AW /IjR!JBi"R G{AB>>ۚ}L0?#.j'}b,6|ީ68k]]]sJܡ'6tC;tr.wr.wr.wr.wr.wr.whܡܡܡ'K0(]]]]Z]W ~υ1E/=F28" Dῃ" uHH D  NAd\4a. 4A,eՠ Jk JiD:TWLa0fueă1`Xc*6vH $1?d-ma–y{T$cMy_\Ջ`>>ZX@`r3ʋД~5fy@ME6 А.U%!@C !"]GCoACz_mMwla̲ooN?&1ymaTB$ 0eӥHE%DŽMUz$](z^gO\^GiC>,Egy|vՅׄT>=t7?rhݫ75]uX$~:ƾPפgA1חjӚaóMI>m&>Tc[nbP@PP/'RFu֍Q'd쀐p@R88`NN0 .^u4xzm#~Id%E*4RBRVXIDxE9-Tp2i}5 yc7Aknd`ޡLJ} gYϐCCALftBu Lݸ:\Nn\-cAeLО?U<_FM@ dbb\"e^T*厼>:.iK^k5-kd}-ybIXeJ`6%Rf`05Q0(E__'IrvNhqhHN3٘6vzތ^(BR`d:]3im0ޭSIͿDe}Ai_|hg,w<_:QSNt)aC 1P+*e,*_P_ &fyT!JW|J%B Jg '_t 9ʝU/usSB\db-#,r,J&V Oͱo>绎1~WIWGm`P߽8'mpG/yf'5؋'?ތ&HfokrQ ~@/G<דg${;X'w$!Ō)`A`Pj }`Ub j #bh1T4`YؗdI!"5g,]bh(C>A#&͆Jy</_\}r]}۞vBzB%1M+1Ιy㾥?[lgTSb&y)XWx0J#9 *LI5)$%IqtS Y"@G;s605b,7,,IM>~?O3BGC_7?] 5F <ƣڈCIx)ye'&T!;=iFuN&0Tb&$D1NU90dEFS[Dҧ{m>g 8႓s (UHч@1K("%E/' `fy%貏'N۶|Ds{:-%mbenOb?Hy2pAJ:Zeb%^o)욹#įvvRb-ZHb@@F`Q$JPc).'4eLN:-H:-'APlBpF[giU.R9KkUllqI|[E&i2$Ve2EPz)DGoHɨz/p~;7]ԥy[>[ښ:^SDY6Ϊ[j̪x 5p9^<^  #?Ό.)Yc囙D$TG;`HDȼ' k 4@6G%y4$)d "@Ci@IZ jHEe5*vllXV~T2Hw '1$ (&3"FC*egbD *Nbi(SBH2}?$Z+0ϑ ms Gc͆}?y˔륵dc Su?RIO_-9gR]] W,y5ڿ0Ue H!:CX"lg{bMՃSUGGUF4ߟ}}Ox,^=^YZ\Em>?jvݔ> "ߢ|lKxݫ>8s';5yK$vrr,u[Ž/mn1*:25E`*_BNU4)P.՞-hdūY7NaDas}ߍKIWb̏_#ay ֞,FnGane a(a@%@Vc9u&@;YAl^^~l)Ty6nT.ϫ?>=4?䩟Ͼ V{waH?V~ng?&'d=Ŝ j)<{QkӊeڸvHu HE RfR"TГȧ2B_R IpJCzªQVGY[Qf?{ܶ?ּU['Mݭ/o?l\yʊ)R!);C5(NE1Mw)]mi=LӺPNnA5gBsX# K3sG Bs z`xs߰hsFז՚X"@4V$(Q@TiEi#1!9"")jMƧygkGlo尞\)cpKt :\ J 1(s:B4.&g< zzꐇ(ãrάxDJ c=Sa DbT=* 0|Y cA_#ǣzf KE 7Yۣ2D* hG/Ť~X~^W-G !+CۈRTڲ;s w۝BR  r <ѵgƓ_Wgq@?mԦZU U : OpH&Q5L9CJB\&%?Cg!ǂQ@է2k憶U3RSjg94mymٶzhJurqNIWƼ~/.r Y{| .f^43H͐)NTysAB;Kk2i)kwEr}cȷ+ĉx]}U$t y \>EZF_WʝN''Fa~|0s1|Z6WWB.$ )+?"'`35`^CqO\g- nh5Uy^V% OqV3jXf-nDY/9h#LDkC_ҧnu@@5n,Lڐdj'IqLΑJ>p=ۖ-{/Gg/8ofa=0 g6V?br-nIZY_he2vn ӖM`L롽ڱ |[NYGˊ(}7n:,*J$W! >eI;L)JW__Pg^sh ^SR@9%" "N #wRK. 
b!ʶ2($^@Ni]^o\svݫN_d(_48'J.=Vs awel+@UezۼL3y%:]*C?yGn?_&\3\݉`O1ở˃I>yݨT+ 墡LTw#D7rlY?$>OvɴxIYQ{Dt6Hnncwȋey:疕qjC} Ej'tg~^?9ߙ]NF|xSq9iV a±f]xNUȍrE'Uɧ=p.yDp+ \ery,p*S 7Wk#+$XQh*cLW";>zJ yp?4\ݎ\!nGz vT#+y =\!KqZqkbt;VtPX5~Ϧ/LTD$^//Vq\\!>whMIEԀ*oP)/gg6"ajjGXb=^eQg YPG}sr3{dx7~~nH9> -n/n)zفxM0"\<Κh殚\p:q'_)ܵy~Տ=ߩӝXSxp[ .+94 )׀TZxbj'ypd)?]@4,Y`Txݪw-5q ,:HD (!K#/Lѕ .hY_!r{E%#KUi)!kn'"&D:$@+(FyL/Ϝ5僵qP(I!F%Y=4F G#g{zVԀI"Dy#3&k2xxAB\)c$|Y&8 &,8eTH4"mh QXVc/UcKY.2Mp*Dj@ mw5.\oT6+9PsI"USHSc΁Ɛq1`ɰlj*@Er0F ~7:.CeCiR!B]L\Тs=i+AynFM6v#'rsL:m+opQ-wwRB݁Z(u'6Gߧp"*%.$3RDrhHF{sx9< #X*dGM$*JIΕ3/c Q>n!tLrY>Kpn*M+tmsr]]J{8?ÇsN{oUfRRЌ FnIʾ0T&4r)UX("D#k% RTkƤWܢP%Ol.7Y_OYZg.=Z-K6[klƲ`WYu6(oY^I5h)Lk-SD(N̑X;Gk jUJhWiPr~l%cPEKyAȴFA9T8ieh <:43Q2TNu"RʶY1L 8&.x뜡BD Dy &5)*#gGr#dS 9nZ^TB]|tN@`wƣ}(ԠVlg 4 '#8(цU((p,ie( ZP5pZ956 qձT9+'h6\yrwEorkftki~y4.=jP 0 IGLIdrd@}u<"BHΑB!D8wըsQ-R؄ UCm1]} c!XxN ˬ)ٞlicjWS߽5ygW,p.! b ’T[7$bRG<pG)RQͲ9x س,&W+3343A'Ąh4' #v1r#Šy,];EmSMڽY$%删>8My~[=Ֆ ?*ql$-d @+kXd*Qx mwَQ?ҟh쉈En{D\W5 PQr7xFcAY$jFъ d B(|(!HIS1Kg\pDDBGK ͭ9kt|Ÿd_\dqkƼ$XEcE7H PQ -'Ca1ya9 hяd~oHs6b΅(`*ÓH*0sh.:˩0a_C\q͜Yw-SwSٗDn$m:bSYiL!OYǻqLC;yz\tˤ1;o\fy[wv~1rɻ;̼2r3?ԷnNi].sALrm͹Fmި!^4:?k!& [%Y Zh7~#}5B(tstrQNF%QQfԠsc2QGeSL̄҂W5B4*QǸD]V]o"IWJSvfF>FݞI͇Om `{G_d`)jm EVTfD/" mT0L]0\]1^=^`}E73^MLJoԛof*o~7)Jb+AcyKW`e"йۖ-gt$R'AKDuVallU~t3S3W){s9]vU +e4^n(-/c/w ~.{jDYUki)+*>;祆9@Oڍom$ä sLxʈɕgup 8xV6x&y|txNv>|v#y\$Żn<9 ] wf38q(8 Ut(HgZ rG)3DB-M _ygLDe0Q: U(mQK{ls5+]qJX FY*^;T!VYSgEmJw7P^ݰ! +y[qu>{@MYk $lP+^B5-zp+ Ց #@*8+CϷIL"%[#KcC*њ468_ Ho^Xsu(CYٮ]:L^Zl;u$`HkmQη[N>Z[1 KM.sev)|zZTiHMOvs˴rqF4 r 6IP"Ja:&h|yT҇UL$А &(L(.IaM%˩y u$Z;a6S o2WxBpNJ0nBۤW\'c&:B0.@d)vh2 hT=4!P˜J > 1QIb 7FGbiEdn F cPG^ꙕ,6!7訹u pdm1iDՔ=QG'Ica] 2+h8d0QOEm[v}8 ^jCg_mWƩȝPsNk;FBt)p*Q?cչH߫;8aN).`%ƣҒۡ uZPoW^C wś˿̪:|?Ձ8_nu?XϹf]m~(=<[JC/`vhUpn&ftF> {GoPt{>]GO 8k;~xZ~ {.OF}_RR%.R+}*M|z W#8QUi[S)ójvIS]Ft;MnFs"(99~iE#]GȔ?(悤Yw߿¡"x~~#3_?ys!\?c˧HN{>hry? {7ş۾9ֿ6,3AR j `Z|㞾_gq-ǡ 2vb4,v?|UiCLf0+z;civ|G(L2̨F-0We\r7SܭJ{3M=2dI6Z+~>U8IExe@%;$)9Gvj̵yRmCBy3lL]H[nNw']0xҿv{~I[|kдq9^vY!Ok6&~ܱb'NS?цoYMexjm{bHhPVJ/BJ-uhKeRLp2II;Liŕ>=!__PKg-%3¥rKDhO2$zM,G'J BmD M;׊e93_c:[5g7ٽEr=9wvqS.ea(/sg\#JTiRs?c9a,G-%퓜S7VGR5(\I¨TN.ݙz 5cj'9zĦ "cg %`7PE 9CG$$/i=_X?gƧ%oV:ӏDx9]2\Ύ$U`XQxJi飣‰`Vv 7֨k~L=Gu_0CVEnfo;a]drMyEmϑ` մ=JZڞ#|L%S]o9.CzgYssj4Gs/KE)9Q42`2p "f!NDi/[Oi/Z=m؏{ Y$s B<ÈV‰DVJh7sHh*jc\qpx!QȄQ]PS[6[#gG?RLqneJ-gnߺxpP[/_o?6"¥aY֫঳rzP4OA2H)K]@^I dE@v#13nx^7m{F`y@ϔL)sBjҔ[(U(Ѣ D "Kq'ITy 2cN3ºSHeբ.: 'Er>XE"1+Y5$,9Wښv$pA^zj"8H9>{/yXA9N>gUE/?7]~7<h]a?V>^*2 8g>)_{l{4 W6r5K*7}-dTw+jğ7GDOBa]7|/qVT$8Og|21?z8ѳeן;y MOr9/5=onq-зbŻ^^T+ml";ԐtrɨҦ7$DLj4)a4RI N >Mg]Ł)-~kI;7ksygH''J B`InT b */O/Su~*zrl0ͼUs_fsFAk`jd5e}`/k7|+\[a ?^1ln; (~dd~mgOS17\/+N3k37%p]"A%8IC;*r 'y)0FV#Hx!M2zޡ6h)F'4IKe@%,sa*J2`$V&cTkA&qy>TQk蓏"1R[l.8Pu_loBj_7rhXKΉ(]Զt e]"Oݽ &jRX,Q0QyZNZ$HA!=QsX8)f$z]І18W! 2=nGUU1k'F%6W$zPCm<9).-fp}IN/zv7 ?edP\{#K5W:XItMKA"ҐH$z&cAk/@Ŀh4%@R%,=.}!(փ1$˪8W $3⩣BS)sZ#qHN H>/= ; ھ v8}p;q̂)p7.Ly>АHkai)O%ĔN8S&&9u f4NL쪸VԜzŻFƺh>X H;Z/H/~x (fJ=P27u8Fp&QS'BcSֻvL\ןbq؈ TZ"^wWqXQ&gEou~w iy;0 2ퟛd_rJ'w?[M?X\y\yOzB.ى7{VɑR`(냁Jjq([@-Gda5[!cB韗ɱ +a' 7 !{:ػȚ\v9t>s˙&Wڙ,Z͂E)sz)9Rar(iz-P@ڮ2IZ^~x۫_l͹^峣M(3=<*^ "w-hr~&2.R+bg@\~d6~ncQDRREOԒMTsTj|gOm{ѸFH?פ^j:Rjsfȵ1O#Lߜ}SN4<\&^qO+~h .in]Ȳ6BOs*Ĕ5rja["gvP9 'g Q쁧1C0(")Rbg, )T))WK=l9Pcr&+wݸ=#r?^La4yKqKFWq,{|?qqY| Im^3?/0d"{?}Yų3*wasˉTKe_ϲwq^NWMQ#+}ŭw^lnk߃V7=b(vt2V O˗rg߻x^[,_әcWhDl灹 wv='aE)E`"J *β>U>yui Vm_OzE YךR=)zOis ! 
G@FZeo ŐW Х!!=$JYY43 4/ױp ^^5Nx`bs Qβe\RHGeX;?+|tF''e[oˍq8[2nsjӊfv uB^;E_W̃Fr6=z0Tl[4g,mI-6(C#ͤ3Z9!(]|;簘 I(f0#klDI/9W7qRh-6}bXTܧz2]8aYj05$YհMrWk[ }Eߴ0:!XyZ{ǯ>ߤmb~yL䌯Z{73?Wb[?;E d폡I!oj+EJLo1o&S0 %{^I|7H{QĀ`:~mwz ˥@s׳WD7N<|SFHw^X^S/>OeAIR(}=.Zx-}0׽b[^x//_ӛBٗ[El ϯwL\6Nӻbp\/TB.k;%O96JX׸Ŵ86ac^[*%# <.|q֞Lg>(+?Q7$"m"z BJdUUjt%~B,[ &S$يTB'm?l|m{Za1jwj@v}w9IU|Kwc_ӜXCQ+BFwfwO:eR;r;& ܶ{oLPnY-ު$G%QEv՚>j{N9r"ys$|>fy~l2W>GpF'R\%TdVh!5r"V<ѩƵ_ hBQ)BQkǨ!;VkwP߶p}z˪}.7%חa(%,ې Ap~,:r@f ?}:*:+ l$GŜaM2G+!ʧ;GXxu`9clEK0IQ,.I(N8m4#!A^IJgd{~y㋛A'k5\BI%( DF:XDsMpbXa#5q4yUZWTiub.,,vz]v+Yڝt {^>d,U(TN)3GHnQI [@DM]YhWvOS*EK)ad$Uqڙu]&[dބ[xy#vtpVz:ke-e`X)A}}tc%bD:?cgmI 9Zr2 3|UK^ =?A<ܲ5(A);_(xs"!)sc:3xjiMg&< 㔄h2> 4J" QDID=Ves" [1 >pawc۞YMY^G*#R"%1jTfi%1"iH&m  =P:]F:N"Ĺ j,J/ R)6 nkI?#6X[& @] e;``ւb@MڠEH1&EaiR ikc4A8^ 1Th=O(RJ :ǡLg_7οFz9R')&Ng;]o*gЉ}KNkm-njnWq W׸;(}rRaBY~{MHd~*y .=bq 7mVדe'/:D9t0 lbY.S -ۨɝI&-=MȷWӕ!v%NkA6l_>+O(^pهtF ''eeQ`4`Kئ; ̽vC.KA)ČFk;W'1lsݓyuoW  Y\[ʊ\:$UXN.GW0mb8S`q6{ 4W4qg8{|66ϲZ.wF7hU1 2{Lh Xt>ÈrM"9x]WÓ&긺kb_8B$rgp;7: m$~Ej̵Z" e6waks&J%` ! BE&Um<%QUm[z3g7:,% 0UY;ٜn7/\b_.|\G&bR9c6([8SshGO 0k׎|oT)ZTsۀ3Y4eB<2rV`5; +%{VR۰RGar/.= atԞ- ``Lb{0k4QqتW$IO3);~ٞyX`}A#OImIj`FIx*Em[b""2.8V9|H>y$)_wqu]ɬV\y6 Q7Eh64PΧxn^7|1XUPXAdP 5 ˿k 2 Y/QFWܑx~> VXﻦ>-Y‘mh@'r7/[f^UA0Ԝ6dJ4^6YBi j eT8`F:1)ق39j7ZI'O2@ R9]4PtB !Dty|yΡ}؇ټѹyMz28k}ooKc[?cޑ[Ƨ9M_,T`8vZZW&$R Lldk[3f+d+=&h?z-]rA+HTKenf?fqOh'Glec H )q9@_J. P+ynb' cƞOJV6uB v(FsxOg7b8}AδPԦQVWYPl 3+|%0qZuFǨ$W\-FN| <.P[QgXXPh8ZJI"Ql'ag֨_DJ?DD-ED-DqkWל&hM`*A! Ŝg⹐EvJ rV0"mb+Bv[P "m hI:U*΄"BfAcdKZI5cDL݈<f1LKE1.\ܦGT@h[ Va0"B1 F(i/Ep9p/xؙvcupgfLVA;@M2VuÊWأW؋lctF졤3K(*'BƂ9RN@0!*쟠(C^^nǬb1b\ Bnm6s[n>t#޲'jn.g7|v{f~qg;9gGCڋi ܎mȷ'|a1ʙa:loyt{6={űݙw hٖ,HYxmvTKk HfOFQ(y&^ֶ'm)ɘ()gDadyaNJv;;g|EE}Z Ic$) fj"6,]%T[OKSdQ@nK(Hf}3ŠuQ%Uц8nJ|JEL 2!fSEw0䋯h(,!tC]֩_j,ac QbQJ  H*tB*'_L2;'P0c̵Wfo۪-Πxwt-_>x"ݽ:?>wzhk6 ?HE/GaW{=B{zn<&H l X@1) >0* y&B*0 rB0ccN4([jFAeYbbj()=&'h:g?g+ׅofQ+q7\F/sVP_40폗Ae֚P¹EYm)S&@s`EvV&!RIvIj RZ"eL&zTB`TX)e՞'Ujb{Y?E)pG>Էh×^e8|[gjf/~r6,<+NC^PίoJ5Ad`qѪj ϣF}lq;.y[|ۛ)-<||藩Mc3)elҭ_..p֙F/6_QvˍxK ongh:+vc9ƍ*B"$F2 e!,:d(YIMdtO1dSǦw:fJ̼I 6uW3ejyZEnu};5Wz1SJmv4m~-wlzwxm?gUgk[n>HpB/dl_Gt2S[No3Dnj%d6oYFeAYJ,mYYT$TcUqd<-,ǪRWi1B#X,Yn'R]6"uhmȧ(AK faUo} F`4 `#޳و`04@=ӺRޭ:slb@3n7(,6J\B9)Iz ZvP!;{h+t!+x9_qA%- Sl$(Q Iie%hSH2UPj5^"i U }TWVLݕn)C{rJv8eݳr5ݽۖɟ^lL}Ӽ\Y4O2`22f@!nB&b*N D&o7w~{-\nگ@;;,y͑`:ozx yewe|ׯ\OtoF ۩lV˜% TQ )mP9~R2F"lg3ǫy _}h /Kb, SD6_qٙ`qγu>9RR?y2BZSI#ZEN됀|YB` # i|py95O0#2`< HI-#G8'!g7ֲ3f|Ty6BπJV*K)4JX>.@ptNQ7K/U[CH&OWe* G#oT<5\i߸n4KAn(C"j&i~W8alMPa!p jII\7%1VwlΗ[g7+5,Rw5IM bgoV݋=И!(jL͹w͊>M 0?4`=43I_?0@M{VTΠNxbWlʪd:MMUv(s|" E33SgÜn^xnRf;G*F'R\%TdVh!5Z ]rC WYluaiQ^VDZ OѐI5IL ƳGխ* 6|;"^1a9Fĕ^q=)S NSAl;ϣ+)V^@2LF~kM[W6ڕ=t+rToqх)2ՖJmeڠ4mǪ)*mmfRoL'ʄ5ceB7n|40E%-c.YhgȢ35$y҈vɎf>FlOY;~pp<`mRM ^zEx4'LPc6KL 灥 }~{ճ#?.j \*VGPt DsMpbXa#5qt4֨hT'ުuv*;۩v}"UbYnO~oGKp%t؟ə#$(\E$% }Yh_C2UQ E,Rdd<^~\0Y׭ڹ"4ѡ l > ;oĖΊ^~~߮{L,c6!XطSj FMsFV;||Zpڿ1QӬ?k.!^h;L0Qj݄}0?5`0B3'r27DC#_ؽƜޞ&}l]ESdQke|+Xtf]n2NI-EI^%(T'p ZZ|chXsf5K4G|:R")9IfQ* J3K`@{tr9=|y0phvĥ U&$D^mhku4^U]ϲ A5zr W:D)a ?!q}[*ސ%T `oHEpStt]ItS QKϪSn2V"?zx!b`PΖ$yR٠`^r!SdNΥSwb2\I9e$ tܛ NMX`EX.*&#DgkȾa_Ťb ,fj0Wg%jZTbie{\<׷h0xX7LdY5`,Y~gutk6Wu3 K3MW3 }rLgG3畧F9^ZVYE{Es}hD#iX煈*p.2a`:GN:4f|>D:*)Zj*wA D87V `2mςix*51vvT}qӈo1Xzj– &@-n{ˍ1YvTՀ{+%,'_%Y#<r-e߼+Y:)t|o}zVN 0DS)ֿ;i.$['O:#5yq*hʷWhm TD0C|~/߿IUp=\q|߼ArK4*AT &n“4TY(PRu4^Gș8q#GM\B??36jM{tet1tc}y6,S*ܛT,jeKշ_T *;5w ӉFmj\2؃d/_j'W}d"d57B?l??ܚz']`x71<]ҟu^67nݣӣ8j༟LJn.(D$њ~w&9wz?eZZZSW+Z9jIziWF'̡jpjC_d~~]+C?#o|à ':â$f9\p6Lލxi,b%h2hTl %m+"R0ttΘ 9Whs~>-DRix`,(3 0V0t.mEasNR;ĬWB"A29 
4T;^sXe[iF+StH'q6:6ԖNRGƚ>ܜ+6)7vMRBXPZ84:fs|Q$}*g;܇ S"w&{I* b@aaY:b飲yaVEKOQxaR.Lc1>6)v0d m(&T*8⢕,"L@zRRdd,e=}.ꁬ@[ A;܁RjE.`jN^IXdҾBж^qC, $ќ=Fn7Hw"Hh $ ` P* +zcJǘmc 6E|n' }:?4D~exBU#C|Un)ovoIupo8ERz+U)DPδbLa,0xD"!v4*k4 <4 ;6`#*$;A Utȴ?9wKJ'p8ٽg+_]0ɦ,We,',5?@0CT40n̬vyJܦHE1FVT⥒Xլ,Kd̈2޲ C0!B*Caj3jX$"﵌FMЖͭlZK- fCaqĴ;SC^Z<%F8†95xDnj%1 Y+ m6Ͱ 9N`.0 F" T)VJ\GG6gYyVDaؔK"OMY7vEJcz)]M\w}<}-o2{dP[ l"6HJǤXxޕeBcm"F)eD*h]t>JwQ{8QyǖF#|(GkQ:lH(#'8>jK ^Ȍ j0D 18[lhWi ӌ}P5̅gTuT=,_ }&Gb362¹2 i -i$Ƅ`#Ly,CLa.0f[A3`xDTH'XH&2BlAst!"w͌m'c_ 1ڍiǾfmݳvxQXp4FAX4-nq`e0?8q5ʇ i "d 1( k<$8!]{si^|XilC"0bS'#R,#R3bψ4B&J <1EyL` I0R p4Q0kvZB+@ FBb%#1)(4( &LR#1sl18eE06:Ӓ}y4̋Şb$9WFhuP k8(lCpG$އiǾ|HC>V=,ZU9Fnvb˫e"͋ڇY-xmV?~WMl@A3`չ訧N~cͷmSoLZvhĚh# Z5M,{B$Ek(f~c-ݻ/뺖2f7^ۋngZ1qCyN>5Z/:IS|ǯ9.JU92냌ЮoR ̡2u{OM[\+^N-߁Smt~!s!|Fӛ|QkΫZˢ۞e7e^tŌ~AB2rc`xl^\ rVC9Geѭ:rɵ$ y$?S^_! ]n/"j^DԎV((l*VssjJCܥSmQyc4IGCOb7‭jU"bK(3ø)aɭlܦ {_M_>ΪObuwNbtSnԻMzzہgjt S걃~s+J: E'ӒGeW,ݺj$xu LxL϶l> ~k\ `~zPlu=C~޳j-;oÛ^16+#H5=ޤ[6.*7lԃlb/yxOoICS,ڔRw[=ۧ"=ƥ&YU{ N`+2*ʠkQi'TVZ^CԿ3,ƾtVll[Fl< l_&zN%H`&D_0  %ُqT5;}ˡeak55,(jZV[dveczxqEb"ێlL7:n[O{^KGa`EpgLpQ m? 3T? TZhAMa8hy q.{̵!,Ҩ~1,pR2)65q eQE!5!X(-yF:H*0֢Ctä$~slϴM{43A3)\o3R_N>rm AmiU?j0[Z2Eo9l )˳M.6to~tJ65^ -@nhLhǠIQN)1Z@d;rw@B_&FZۼhmJq芳x^ ?%%Z9E>ג)E 0Gp/-./LB Ƙc|6Zm=;쑗?i&xX5\gRrRuՍ;h'xڔjΟ)tFY2ed۵,hV*FDCQVsPҍn^M5ѤwRe(5NDV&LDP &$L&>l"]lo.D!Tyª3< .Q]Ihm􆿝2KWvCIxJ@WIIu[7ᜠ RLO %ɣvDgdtswn*xVN?2F~N%[ddrJ"s0~՟7`u$Np5]0%m%Fo ׯrj;]% dm=]]Q]`!ig Jՙ Ufwut8QKtN08EwZNWc%vk4GWHUB'?޻:J, zuStpuwrW l=]%tut%Z2!$mw&w J{J %]DP+˘ ]Zt*c+-K8+; m'UB[]%_Е޲)b_ЂdX| GkPoq-h;ՋԮn,;UXe~r}Y1^,w炇:Yt~zVFuިFI:^4\PWmRm!}$oc)Q Vݡ+,U}sZ ВnܧRtutEƈu`;CW d]+@Kk;]%bw`;CW ]VJ-z:BbJb:DW Jp}nhf7~'w_o4^k'YשAY'YG_7$y.cHC)J`۷|}bo=Y#qF ~.pVᶏSݏ8Rhk{-Ϊ/USO+Ҫ˥B@i5p?JӔ}\ɝ\pLJf/H?0B3'r270i LV~>^-V{ 7evVϒ<:2yRS$_7=,7LO-_-t&zە7bF SZEN)F3Ak4A|DsˣEj/T@^V6_">|:i"}.HX'@_؄ A2z ViǬQFYaj< VIJj?*nfgKA!`W>AKI;˯^|gdt}]Wf~swKH.!ZU-oqDfџl48{[nI40D/9qJm)`FI6:֔9)@gPJÍr!zP /:|uQ.<|.b!Yξ|of %*ƅCr>3I1'|Vqh~(H-_0Ќf_7_jZp2R~1(bw$ ENp/mD'tN/Swj"\wumHa_G,]`fydodٱ8 p[-9Lrj*v_'S^wkъmGImm"`Fp׋k,]Jҿ~/DJχ7x^T Rѷ-w[x:T]ߣ\kkyh\j{˦A"d, 85U#fgs:|^uGnIG(`tvv.)Ǎ7׈X4#J2![ЧoVܖI`ҮчH혡'pP( y~>JQYu:6ݼË 7lStm"ƠEw1A\, ‚~6xmɁ3{Bm2F+(ZCy!jٖ,"HYxYvHG=6ŋI0j E__(| j|e") Ie50S1^l&n⹽]C݌=I&({CƘ|Ί锱u4T3PD 5 PCN*PCS))Sl6<_!όdD E$+))<\=;Ks B}FؔE`bXBQET{@,*t%UP I7 5_RʃPT \)9QB\IfN@N*4&0lI<0Nht9 5/+t zH6] ܁[:kL*z3KUiR.*L~' `݋.6w>f=yqc hi#LadMQ@gg ub KUwEB2Ɗ:x6ٓPn5'M۶^};9%$OXl>1?d@zF6jK 4Nph2Fk%b -o W iy)堒L"OfF# sSґ=QV|ūqC"njVvXv{9˰EZt lZzTY˚>DŅL*(2T#u7]!*1ZlvTlzr„QQ1&,c%RTkL=affj ->HT6Tzf{wI]nGWOni &* ÖBJe\1)@GɅf۔.߆J)cTZUKj"S\an'>3i.)m͘(Xfq`j"bY.Y'BbrZu-NIIllpArY4R4XR~2#C ̢Ύm Ve2pF"CsdZ 7g=I}Mr\ XVqEԢEbEz$:5DXJb繐EvJ `䵑 XSm R/j9PڙH1!dT:1%f NB춈ud8|ʹPE9.n-S*%h0X Vg 0P #J&J:Hp&&Qx x*vjuc{G0aF:߯;ob-AaApC+E?Q9˥n7g٬fMw!;2V4Yj8 ys'7gj Jh*ʤcl,(Zer2F1h :gșkN,,tuz u;(V&7C+t]^?T*| u?ں=dGmU~7e-v{E@yv}o[\ݞ8}t~9|G̭\#G4-Ts1E}bvqXc\|gܐ*gn7%nnyWbuv>T ]Q_FԂҭEM^t 0t9pHrItrri i2=Xz|:)=}&v?'Ϟ7 y U?UYճhhEڏZr$f]AR[Rr:hAcU\+AcUZO/Uj@t|:v|)}(оۏk )I pq4{cqyCT4 iAJHH9v߯z"%3_3OwW=]SI3j0F& X(ztw,{Sşm9a=ܱb7طs$F$o$5g<! B_kH,3'zЖiA[{+) 'K+_m]TU@1ExNB##5ցeA*D9(-{D#;}Zd*Xk镉^l-hqe&57sC8/N/FnyC?n| WRݏ׮ɍoKFI@:-n`K4]vNU`<'w5}Y^w?29J4Q tqa'"mi+|N߼%Z|-o}(ۡ)vx>aPwncpg߿_q3/O:_PHT~i3I[QVYJ -\"]};tyeSuo-PE=f\X 2+O"imp58+AP3{4?qUOߏ7% P*awܯs*Ao+U nK N_6\ݿRo8ftJ߻Qe*7; - 77&n!+G.] 
͟ '\6TX'p}#n yWpRlڏ:'g'~x$gYoy"+YqmQKlav˿L&?ܠn}N- Ka:hս|aKr3Z\w[\z{Ӓ  _Ҳ8ުۖک%6q7]^,v, yXouem Gcۙ9tgf3-f 6e{<.tYʵS*(aT(WVi Z%c6z"+Cf vXj(5+"Tcye+>G}iF#y̎xr,@K-ą!;ҫ|џuw+6ٸ ;oeo&3?VDrpezsUk*|(I]&22ۊȍS_ lR3[Cvtт!CK:)t?{ ˎ~RqѼqQ8кr cQY $;;)`d9(]nK6pqUاk->PcLӁ4i rWb̨<2 gEdN[ij o]X]Ҫ6kUݓ]~~Kw@`5k \Rn5֕6`/}Uހ ?&] ׊#՞h%Z ]퍒5VA+]vtu ڿan>R}rܯŠvÚ,E*\L#{r x?Ƿge{/h}>S=d@ٹBsƯ2Ff>UJ{37~yA/x3$ &柽8{>5:{({@Э]LGRq)RWP8^6g/-ìji/UՓC&"(,7pXC}5ɥ,>1EvBn(pUk솂b EtEU6? ZcNW 銣B)ZDWl ](BW-MRNZ4EtE+zEWĶUA ;]]Iƌj3/lE{WڶUAu骠4+Ŵ5"R̀jV{[CWV)ҕDA=[.cmVB骠::2\qѦ@, \֚@tUPn]s,| 3dV_}_~H/DBEE]UyUOKܵg5 ˺V*%$n}Uum݄3B*^u/7ua8l; 1>|cKL4PUQ!@eomԚE o%sT"^L^$"CiQWf1fM:Ɋf$ `VR$ J8";!\v,m`D:f'-jd2rĕ rI`@nB.ne hKdE|d %|4'@{4 L&H&_6cɂ31GV kd#YBdvY.t1 ZT#.~HAd2&7¸u(C,'Ce KIVf*޵qdٿBmRE2 b%`P]UIHۤ(Qf,vx#DfTխsϭӝȲE!wEڶvNCn9+({ D5ӝ̉ɾL6#%8TR 3eq>J@ZAA7uL:vA@gGJ-)!LEwV(J892rVj g (Ez@!}EAꕶ}uZbEJ]VkD5ڽWQA}Ysqߵ`3իj !ѿ5yM}e*-G]e 6A j"bH v/UTz2V"tVk4iAw _env݋-RiŬ3ICQŘ@QE&i$% !pB6}fC< غ3B'WElhYIcZp :JƬVwڦ G1^*76B7# Ji=$]Nt!iZ.TUFL1L: !'`Ge %tpΠZbM=A!%DB>nobuԆHTPw}؆`8BulSPAᡔk-m>%TB9 Aڱ , " P6BnkHhmBV55^ PТ tdiB\"(^Ib$#EPں$?<Bjw7qVp(yXT*UN$E8*g@ A1֗9mZmW /hV ^{YI$Q5$YN$egQڀJM).Q{mDu,e4{-w+$a:J` تQ}Awߦ%L0AlRjh\,)G;5ns:<7Xx&Mv>MsU&#ފA[. lgNN44z:63 [ ΝDn-,Fݚk)DP'%. Ę{r6IѰ-*3bT,2j7LJȀ5Cےb[Qer#bsK7{r9U32TAEAj0F"K[Ƞ r ==`-f=*-P >c "($'M2X:iP'W WB`~E3`X0j`RKW((ƈM[EEH̦nzd+~x.$mDr1(j-j hNmU;3fn+vFW -jf=X(] %TT/Q1WT ZvuZ=DTJBT|›-ւ L+[^qE Do+ (|+E8P.(-l5EP+B~BS[0ZJFDStA8 8%mCiKEQ[1xx@A4Dk̦ڬUmg "&b96#;ƒf5$=3 T$-8DԅK;)JN-, 弭T'3tuE#BPޙ#'gAA(EO^v6[,{àc6 xcM4 Lv&)r]Q}jGlYjLi蚌Q0,"?}/0zM{qRR,7؍f9_]BVsLhGڞfoF&\ hƥNǓ`@7488bq1?;=ʟȳ|J m+]-bNgZL曧{W/0>X6_nX‡жխ{>]ƾŨxHLCG(R練Rn8w! FCwJ tN 6b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vHL!98@ h? Pnc@׼d `CWF]Z3BV I]޸Uz(tEhϴ>¸07~|]TFZ'}CEyv{3,5WrY^00G)&(͕Ve~q^k~?Y7(I߱8To48D18.\j6IW}F|Dnkb*]Cnhm Ff6L_1~cq fuq9$P״xcv޿jA]'_:y%E8JJ5 ǓcYHz%P$Q &z(ЪϟrcOm0C+EWW FZLWGHWG?d'FءPZtute]p) C+Bkšt/Őa0tEp`F}tLWCW. ̀]0bhѤq [/Zý'-wBq)Rf.k9RcN ='_BkT }],ק8 D%?g}js:b2<ۻz9>n>X6_C)+%Z~b9-[:]zwײ}&u8Dhy)Y_!}"0C WVC_V/>^**>h~8]M$ ?/鎾QAOۤ>X(l 5kK9hĢA'nc1bNw%(P]*ŶxULZ0FXQRx:uj2s\_׍*xRzoh2,PFtq(#'ZXd4ٿȨ{~96o\pk! 
e oBOEp[~G, I8Q[mI}t2w}X>՗-0V#<;nm{xg4RO +~Fz~7~BáWJcW8 R*y=Ue:?lNo.SQbƨbi-A켣|q9k뺠Z .e{){쮚DUfsb*lj4d\hѢUV(`LɎgc{؁x^gT ,UrF[g$^ʔ߂Of2(*9(ExD7G!5 Q1s&`,2Ϭ&vn~~6_,y;7oKާ9'MUcTU]sZ$竏WUKU`M'$]/|,_Jce8!6JdDP 606KI;n99M%lĒ $-3+ Q2LN,*3gn1h輫;e0ۗ$MQyYƔc gdN  ^wVg;Ľ^LFC^#b\,, [WE[]^np8NQEX\m&]aܣ0y E) KB]DШ`\g j +D m- P %I1i %)u2$SI 󌞡+ٮ+k(1i5E;c@EJ+nJM<"8CTɗMIqOdAJESlWf(},!tC]&,g@b]MQD'ʹL9l!{0d܆V:.)֫vb4%{_KLjlksN1SI^ 9@@V)ĴD'KMLҴg~kߴ߯3Bֱ ݁dg5x}uٯ5m7 44*ԸH)V)d_~W@ѓ|2 %K]m3>84c6IY "h-UĹ2:QKJG@Xc"DMaJXEBc."aD&sQJ@;$jQ>+{L#@h:pw(VU'ħ>lCvНp9?[ּvone  -ˤYwVȄ;)DF:B K ݔ%!sL[Jhm b,)G%gq.M1PGNpͺ9ɬhP04*Vxl;|'235`jd+&池vٱ/jʨ{/Md'Y$qPIʲ[  & HgU0kcDWiBY %AԈ$"QA5 l'\Mx%jx:,y b'"`[AH;gAeRN0S9ڇIKVpaD\P("ݖ3F"ԫ$X.* DT"A%-r,[j;K#|fɾ(*"qōX!wSV4S&L3t,'$$WVG`Caٱ/x(@ME5gtޮ[Ob)AaAp}#E?:!=p3}^#oDj& FF9hoP)3HԚ>̱ϛ0oNk!=UD 9fPƈI%LB/K}\ArafK]> w-]w]BnmV+PK9AC.oԺ͢i8dz-޴ldqhRˆZ.{wix?λcvyPR9w;QϮ-w[>?s7_6ݵRn\|7ն\|zqlsٹmxE_ПƵ|vUR ʹ QCx^e KF6%ͤ D80Ǖ(h3 D>zGy|(}wݕ  mHQmn ]J|}]ξr纄+e+!KubFM902)5L 00OXt'm6dt~Sϓ^9y)Ny}Noa5_oO~^3>}=O5B9Oiiw`kRӛ9 #i>=1YiސN 8!Xn:X*HRD{Wo·D(ѻ@ҭ巆EWoۓGI 8Z}|NW^҄Z|n}}L0eobB-cwzqոI},>%vBo&t^vT+QL(>ĥ}E z:-td[&h)m/5 3By$i8z'%>aFc#h՘~9rZBt<.CYBY&7 70[ =Y(Akcp9K j't=ZxpEu/Y r c7,G.k] fk Kw oz}B.2*Qjo40n3?0iԿ&NjL$Hm&9@ݝʌXNݧB1F2.v P1L0;drWr>"Nq8\"7e`sI8{ʒfFفEL-w[W]9.'͊@]e,SB N98;ttFmpn-ITLQ1ggϝnCQs{ft#\9ɾyy ~{H ud` 6U4XϴFcquXd; G{X8goc-Vak> 'f$=N38:-19/y($eVK IuƤ=U2!kfi~yq4!k͍~M]^xXu̹vsGRɁԟgB?V&dBw`, VgH0byw^/t>^\Ro4}=y_$c }/d9E+a$+wilHKqnĿM}vH}vҥm,}PP0"!]OCw׷],t|3,'K=]ae ņs^/>j]K4O+or,_`Q-3g = ^~5Xx*.^  GMVJ<l\O.ttсx CnvAQ*(͇ͬm *Yn(M LSA#?][1F@E~q:qCQpDȢ@(22se:)A*32Oe>D**߾l㻋ՓN;W#ɐ='}@&`X †pREm"X/@>Z4u-} VUo{6-\mK8ԁw\2JL /]΅FMd6.[e%-R=wJnhnLU5WGFƓEe?vr^˒ѵ݊+FY׵bkc.FL6'nK0Fb" d:/})Ŕ 9&J!81G'>|ޘQl:mo< #4[wcbz7 lf֦6ZeJjӋQ`ؐm9g|{D Z]w WG=hHwVUozY_7ts_ϯ s6oӚӲ}6OΠ?j=8[7IHA7QX)7]ulm._'o!(t1)P5y"ָZNKC.Ese(Rl1=*I0ͮ WxS,{,̼=K?N^+^9ۃؙs€$5HhY H'Mrˆ*]yoa^ǫ :@or-ob75(q6DZ:W,y?5 ZIE&4pA9d7v>.W4rkxx>Ot'.8&aLFK=x Nt2IxM#9lsB7 Ob5[;i-.^#}<@nZT:&QB!JeaCHHBOmR!=iaRA[voxTJZ`ƶ5tPK>9{8ݥy|o_ύ4 a(#g퇎 4|U`"tI۴Їy{k$ثϗ V7]y4? Ž˯w E,8|w0f硋et)i?YF9jc@ЩY']ĥIry1但ƿ}"[6|?"{{dznQN~gfOW{~o| 2'tk9G|b1ͩ]!I>Y8]=9o.yj?pr_^P>t13{ D8H݄R:QX,҇xhqMG7HH;ֶ֓BYѸV8L!LeeJN`Ҿ]_/7lїaU G o&{rU+>_?⨕Z.7> 641ah Rwbv'6t7WU/O5]gHy@.!L=~s< ]6= %tDp כPLһ8\u>/67pzs4HdA@`pF ̃9U6`)sF:4 "`/\1rŸJ"WL :wbJS*+Hأ\1\1C+2=3<"b\J+\1!*Ժ`.F.EV)uQN$W l]1.RhQ)e^\G=(?}jFXFႀij(>ʎ+[j7Z[ݨجƾ-Yr9Q1m,l*P0'FPto]]\/jc)N=F`w?oΗ s-¼+uZS;"YV b(vL]zSysܯ#&_7g@_ jҡNeX;MHXWfg۪ƨV5)J./NXƦE J5Q$`MTR{EA~? = io`Jp~*HJJ+u\1WUP4)FE7?I]r9mÑ+$WL#Wk)U:@2F*)d`[Nq,EȚ:D,ɻb+b1rEV)7VV:rMQ[QwŸbyrŔ(W4L'rn9Z}$Sʚj?DX! +u)rE.bʍF=rk[TN9\(#ۢ.W훧 n䞧:U'c,XhwŔmrYEʠ# -H6ݗ\1/ƻ"Z\1*W(W\dArERjg\J+5>wbJWC+h "`( *[\1-fd)}M\wKX)FSs+>rHXFᚩDѪqY GVy *?I$JQru8re +vC)rE 72+ԲʕU4LAr+b*%D+5wbJ\\Y/c9 zW\- 2FQyÑ+N_=)_\1.RimOwurjQ+r УӪ 2%Vȕ|#!$qvjjehDmGRJ+Yj763[]"fR=1ps[8oɀuI!ƙݸdhz-Bg: -Pc 7SbZ0^)RXP$GZb"95Hi*E RH#9ōdArN#WP\Q!]2*W/"Wٻ6+Wa`r=n|,,68kCD9 T5[c D6OU:bni+b6Zy(5#]$+T v͆:\BW@{&>:J阮p7UCWs+%:JkWHWV݌*! 
]un6`GPJALWGHWN)c椮 9pl;Z}tQ ]yeUl v.t^R{Gx*(/ݜ`%CW2s~4vCWCNV:|g ?0}_yyh֭@]gЕb7N[y Igt8U.(gϓzFAM'C ?aM'<n`a6å0p躡nX7nPKgDWW߯,JʹUGUGItutIӪvφQWNWeUc+"҂fDWn*pͅ:]ud Y#6);\kBW?x(a:B3+rp*plh:t(1]#]Y+쀿f vJIcXO;#oSn #J11.zZ;UZDӆ}7Wq_\<˗/wA;'wz5bPۃ~?#k;B鯼ã12 ?jqӪkD@J/Em8^BjyY_#X:doд!Z co ׊W1oތ\]=~Fw SzM>OnEf͞,kH w񑬺Ľx Tiay@}** w(fO薑yxB?o>?_k13#y q~K7}g@o/jN v**cRQo>GƷ*%YT6Z*2[Ӭf5-]_ )Gav$Z^Mmcf`RKѪe"fK[rMNVym)ddF4A o7\t}_ *DW-JHN eZXr2ʸRQh4,?I?(F)>ٟQ6B6Q"`U8GH%i6GBQʓ5֊U^e`t*$Q (5[t &YE:M 伾֌%)m]3BJᨨf* Kb9Xb1Lf cX@4Cc6m@Uk%AёLYTZN1 늳πHDK3R&:$%LE:݂6LsGMJ#TAtLa?"ŧ4zr@cVKk޵,lM~%|m=<TH }Gz]FPDҾUMF-[@!!X2hOBU |\&9 AY55ks~;T,mtY kAޙπXc%swc5H؉2VM`rsi_!GR:b@Zq m@ii fJ¦ VCJq VhrЗ Teʦ JFb>I+y1뜵'Q\2I&l.-6JȲ{tb`䒳"ac+ ])(J͕Rl SI*e»P4.CZ=w)K.! ݗTd2jnuě!q2pt/=UTBNU~d}%*U U;#*)@VT%gB]V3!Hb a~|׫By's8->J}}6B@Fx4= Rob:EjC$*rPK\Ha(#TK7(},C)9i0 6#*ZN:G!v, :5)ɮ$1EfRh"V A+]KdP0%@'(YEƮ8XdЙ$ fJ$) !P?A.jDQUPբ`Qy(? >&@JY"RȡhjͥV }tҞEw֮ipI+ j@ec޲6R*4"*x/TNQ&-w+$a:H` Q}AieTL/L&4aO^yk7#X2״7˺\+hKqfQ:Z 6Q\Cؘ5E{hPp(r2HuTtkf" UyHY-:h4v Ƥ*7a[N*3B*j7LJȀ5ɡM!dsC<؇ܠvzI,c5:#SA )aI吐 HOOW,xA7mŰll+O3ۅ+ IJ*UJ:*HjG 9)E^2`X0jaO1o(eQFRSdPGb$djbb,RuJ` t@ Qti#ʅLt:ƀTm:c&AJiPcM B%TgR &C@0 >Y tStrgӮAVA%ؿx3P[4z!@6(=<V@P0+EXQI&EzʨBb4ASA8ANZ{*.(=GPRₑ$Uێ׈4JCqxo1KLYRaԇ sȎI8ЬMrAH8DqBN 踒#f !Kk\F9oj#BRvE#BPQցR `j?z_il{àc.# ew/g B^m^ Ӌ~DO6KYZR~hˌQ0ʎKo߾}y ,^8˭n[s9[6iZzŶIw7,KܜoޜOnw?9L6/rYh~}[vs)n6{]ۛ7<[hD8Ұus%>nnmzY {?&66/V}sk^_f4}oGdݏ5nק\^ngw{q}}ow:o.r[ysnl}wϛSn| 7|yMOD?闽&F;],پ{-k,>hɋ~d{!YMںDuP @t>&w/ Qtm 2Q4d7OZsMxuozhͨw]\_]#f?V7ߜ.n67NS7?\1нdj÷7Nŗ tV>Ȟs||@ .xC1 ?YA}d$ >HA}d$ >HA}d$ >HA}d$ >HA}d$ >HA}d$ 鄓~N`l@;ZwOD(=?(@JJ N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@:^'2!sr)㵙phm8x'PzNctp`'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vx@G`qq3땆@2N v@b'; N v@b';kmF`wi0-b$ndIQ"[R[m;vj]$*X}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@}$P GH'G ;spI.~P?\_Byy~& A 86RH;K . \ij}HF"pЧ] I;W 3pĕ+pJzJRb++-b]bWI`vN[ΰ$-m{Ϯ Er3v{Irac R:.σ&(bZU܌'|g6YB t֏et/NiOFa&PGY92-c1Qа4LD;+xyqAuB7p:*bUjIa9}L5\h/9vf1|%gSt:I{gP @;Ac%"Z`jHQF)3ZdPY#ɓF~d#֐(X Ef1cr҆#VuIBے;LpoWt\ä-`ޒ;v10";WI`:WI\W8Eзp H%4\RUWHi;\%)Ut:@`FWI\FWIZzvWWLq*d t3$-m$%k+T]rv՞e+p7]Fk;WBeg*QW*I UR5•Ԝ.t@\:JҒֳ$%=\BRZ E:W*]՝ I\UVw]"b{=DSX#qf:L\| IKWItW:oS0IїKD0T1v %sݏ@*wa>< {M?2'-е>䈨}h^(261GHfQ2].c0F% I_p$-i=oHRRW\,8 \%q \%ih;\mp=\\QUw+ĕ+pլpR{e^#\1L+Al;\%qîZEHW8Xw @`Qg*+hW*Ih*IeW(Wd&7i Qx?Y_'o.>|x5N‡ٗ[~wAHB13)2+<FG=e>OPi*H-&]Edk?w&O]lv}DXUPܾũ7kLp^ӔgۧA[y: p6/|JPl.4:vLN(q۴AT@zzsv[37}wdne:h|ewOa)uC~ԜZEk%L*SeQln?΋ 5,ٓQ˹ xZ(òWc/|<dt]ױ%}1=cwTSgqoJnV|5RcaYkrD\b|UYmTuOoAQWp)0۝/. 
B5etׇF̿|^yVc;b Mnb.>{x2NiAј6@99-2͘A6ƒ EmCr{3iDNnY U*$[ A))pDNP$:FxcjѪM,͜ηSO6,m7 ۛߣmm:_{s&#G}p2'9DφNaC|#I)vJ^ӊ:, >緃qHUp=eb[>3ZIq| *f`k.gh՞s:FY^1Tk%i&RT=c`1g]LeņY[6Զ`1m&iϷaE- V(׬L11.)X.GNT饕cgS0Y:Bg(DPX)got&H ̀]QQ7_> $r)DmL5 -0,X@_O-uv3wzbDM z9pd J>⠴Ꮻ)|+[+Z]2PP"4.: (|&=4"XEFd>Z[(#'8>jK ^Ȍ΅5sf"Tؘ896bj UϊKR_|X3?m6(ěFϣa>#62¹2 i 4cerA>8g7FS͆>$`l "6G""E"bU_="nx +%LHADa.$HKi1`Y춄V!l: Ą3[XAA94aKFٍo \̳jr٘a\=.MqV\CdA1 9+Q(D$ۂiDZxHCr>-oY_W5b 6xzc!=t+3~\t1}n9cmBGN=o|˳J?y`AΦrfAc0kO`\A/%?O LCXӵgn'> #UZXWӛIyȘSyUxwe{\+]fɖOڼu*QqIpkN]hW jLK:ֿ͌8 _Q <7PNn!;";5I4 %FOc XJ1o:|gH|3)DQƐ2d!/$VgŨl.ne E5v\obGFA`$V .sGSVS/Qyc4!ڧaō8`aA`a[0 ^韦pM72yz~7]ةvުnªۭfukֵl--XhfYUՆϳ;JPyu>c$:f3|Q$ؖ|NIPˏ ޺lGEَDC$c9wF`I\t3Z#5TRc)bAĀ,8,uGBUxmKO4^JLc1>" 'a٘8{ܰ1ӭChEL1<9I"%mڰYlQWg}B[y {5XY8/OW[ЁR)PLpe1D9F$$Ji6."/h"xS :᭶&H\Ҟ-QR%(TC #a ^^d_;Zj4XoSTֲ|&RdtoC^ p{~r6ozs 䴋8- l"Ƅ9ɕIP0\QyƆTP@֨|Q-i[sCbR2 ]>7%8ſ3&HЁ}hv GT>G6/ny{~|p6nk6i=2b1iR~[\[1 PoucUʡ[[ n\O'ۃxnV.RNA3D`i40"RDS2_94ۤ(!pa55 F&@p1xFzڰ1rvrN(Ck]9]o(B2.f}Hm %+r- |jڄ~*Cm U2qt2(f@$` ?K'H@.&jʉETCs)6#5z{ ;gӔAD.yU @7K(vɼYo[yEr4xrQȹtr݄ȵ \[hmFжgӈ$}pI>wa#[M[wT'e)rE ٤h53 BaHsGV)TޱL>=w? ?qb>.܉=?쵳WڡgL\\Ft,}9b0!ƙ4FMDВŘ jh5U ϠSE+S^YR`(2BE-< Dž)196(Fi!is@sw~DyG<\a3Ey}q  %NJP7Wm+1Hsκ\ظ"eiF ӏUOy^62%ko%aZ<#d"S)]3ÍVƑ=JQ6( 7*P{PG+IA;9/uZlmG!&PԴes8k= &xz)B݉q"r{nTr'Taª] \Rcܞ.rG)JY!M-'o^7zd?nW[\ٯz*mifM;-^y U};Z~?շ>k.7sW' x}O-ޣ.K y (/^<)iƊU_3)9{{_"L/k]v~D"FHh1ԫQwq k2,g?Ŏ (e]I.䝡[_G;8 q8=0q7 e);9t>_H^FT1?Ke[MHmZm]Mb7-&hm'w+o߶[ r o +m 6 >}l!4kYEjk`ܓH8T0%\ D]3l|}2n$MQ&@cv/Ht@:_#۪u_/P\^왦Y 3]IM.ntLRo{VtmՕ{.pLNxf7ؤeM羽ٲlof( |& E3SeÜݿlS=CU IiʕKRKF*)O&8aD!ZyGi=,8'ˋjX@*^vE1c¥rK 'A E=gI40 NFjmP})D.4i^#J/sO$NF~Moue7Klڕt'rTc6 ^WrMuC)D'ϔ Mz ]>Pe Ey?C~^*}Ě~a~W(_ooEf4D1.\ȕ_G&vvN8* ޏS2 )w)lP[9$׿鯯U䀘:U(d ~]&k$ylyy,'3=^TfgDv~*jn}]7`fW;2fz5L*tSy*YS'զGsois3ߞӍרǝx)TSbԷX(SJ}嫜9x,WZTpJ B$P9`m1s찋90shvZ=U؉D/"16uTP8|PrɬBDNZɒĴ/ ۷ߖmYu1t GsmM I@XQh>:B A&3f5_yuUFEn^nL6tsXcԫ>^WQmuâ'~mR aZۂ{J Q:)n ڞ5Ta_lOs@oeQGYzͺnν'MqW -\0b'Gg.~j֊m1.-Q)u3ܨ,$C[\M.ouhk#gS[5#g fʦ L&0 :pNtBet< FZp0JlP+܊ְy8+^0Pomٶ| 5 G_sVBgTJRټWxƩ,,B'B4B5 K)R &fO)Îى6j۳-QnSLS@NF,gX%U\cB3ӜhPg JXCBehX)YBA:B)bc$B`\qYmCD@#e>pBcl醝k PGd,ĿfJz-W <˳S%bٻ?c~M={Z;^3r zP4G%E%.yƽA,p%iٽ-C!~xl۳*a37੒*%X\i4'mF+BJD(Q2%@qpAX22E*n0|>nUDI |!|pCI V0|n@2߲&Wkimɱ]KmcxQpL*Xi2 wk!%Ԭ }m)Jo|+0Rˤ݋찌NN!{Ee8K0_soxu&6tK~ KWYYSy7V9rW~:Tҙq~.3bڻTr<(+..ݒٝg]9;C:&g_M6PM8n4w~%`tGNDzrJ`W${r?%ͽvJtJ2F?It7E}m-!Z#y6utByP:'W9?Ԭ!8LAeꮹ6he#[n=}Zdn^.'׷U3x}ֱ\9)aZ3F\JaE%j1;x}klK;B'Œ#Q\x[dّ=eM&g / L#7 ]_69W4G;Gs6)hXX"`p21J$ĥՈXHH2Dh͢)JzсץA`eJ:Y5ECU.Re: V&cT>PE&qz>T=xbzT GUϷSl>OE6:}jaZا*KOM[ >kaZا}jaZا}jaZا}*}jaZا}ja`R>O-S >O-S Tt4EGLen'}ƍ "@\DA Ȇi5:֐:S!!H$/ A"S4GˡKRhH^FPДl*>1a.e 빱1$ˢ$O5H"<9!S)tNhOC8fy; A2ڀ8ߗ/ %Շ`>'>}Ʌs0%NECQmJSo9&A{HMZ"0rKJwz!<PF J=>XDxFLqOD$4B@ѥOa-{ߒr5z?_ƅLgxC+]ewe&.x&d|[~;dه>[nm~\5ƮL/ɾh6[s08\7^PϤw%SĴYz~(`{bLc-n:m[;&+ qjZ#s㞩\(QuSd50kϏLn)zP; UL Ѐh"bUQ>Ydld{wqҖCYg;g5J3u( >F|"Xr#v1D3ݚ %IxcAoר"M D'g3YqƋ&2 Δ'2E҄Dz2Fŋ j@_ pP(I!F:)8h-1Z/#g;^W&0>PYBMF5rmΛSBٽ[0G~FkjI뾆 S_|-CKLps"Ɲᅣ1IDr#P[IyA+n-Xy9R12Nq1ʹJ&bI3@tKRiu!ؐJ}P>hX_5^H.~3ct7]ⵇį@[՗A:DׁJPE_XP|l?pFi+^v86'}$khI a(8j|&`JuR&) rA8=+zc€92a` DTe80hX FGLb:G"0 ?}2Ó~Jn=@!Wrȿm&Ʈ{ic~/eޡa`'Jt]VF$ 4*Tr)XR :᭶&H]Ҟ-U1%'}9wJ p8IW3rNjYH4\n]ƆN7Fu :8ģ (64YeL ʪP5JMp> ^W())"pr)e( T рT%(l^Q4NScp@ :g(@,J@mBȭN*J.FΎ F}Ft-- ׃-Ir  ˬx QɃ\`#'vRReDdP; E4e7 jk T"4pJK,E?\9KqWo*vv{1lD/:%'˷9 tt )v9}|A6X{ug+DRE5iqF3F^N*rd" 1>LdBi;` _T2JR9 \mS'zP!P:% 1R*&R%c1rv-Ub,ԅepNQ.#wIi}jƟ~j5Y=]ÇxKl8&Beb 4 R&Tt3) Iqo'iʞ pgYlr *nGI LD9Z}Rln8 *Hbܱ6R"؍".IGFdS>i "Z=ڡHVYLZPRT&!ԐR@-P$Tȴc.H Im.[>{ B̂l$b)S"rRV"rR%bKZ(H1.F{!%FIpA f )!HAIRQ kH(UzknBgx{'yΌ;nߍG#?;3V]ƾ]>~%h x-M=3>דF ;؇ЛQzhޯp?YlY6!OLjCp9>cеN!&d͋ lߑZ@-ȸQȺ=!*jEP& 
&2E||7YIW}6` ݼN_ϽvD%]%x$r#i!6YNsk굹F\OY\_x[7غnt7w9otfFmͶCl`˙Ean7MW=?6=/9v}Ǽ 䊶=ox+ h ސrjΚϚjuئYnGBX\wE6D uZ+O6}ׂ9j(v! m#k@丩ׂǹ\I:kAsv䌢ywFs$JjWi)c4Qe*M̄ՎF~TG#u $ DK VH Rl4s qCqrs;~ Y~o!] =#+L-Sf `KE@Lū'«\2a٢+qS|[ͿPi\Cf2`%w+8 Q5"VT0W'`)emx)B{s+JgZN1cm۠Z55sL$]T7 4 uR-\9C-n֠hv9J+:!pjh-OoθJ1@\G!@N%M 7tE4so~OeK8WśaO9(D$qL8Jx102j, QEgb9|-xsс7M ~B x8TU;tto1?IOw=xpa1E:>}lfO{c63 3?:un(٧IGrcl2#v4G<͟/hH8i>JTN24s׀516cl Y\UPV2׫WU RU)vUG|#?+۪H1TT!Ey~Wl$n4Ǚm$%LoшP!`k!k/ϏF}֍Eƌ˴r##r 6INB(u E`\G(4騨ĥTL$АPL(IaMb% zV+9۽"P&r6sJ$[_n;ofvZd1OCj|cvE\k Axdʠ沀t)gUDk9cuټS-݉7Md&!b-MUҁ*7K(vIYo~s"53a,?wJFO&#hAo@hmC61D۪Eb賫'cp4}|:֯iʫ+pTPSi AQlR,OHXb D*wm$Gi7HG!`lMb`7ȇa)1HFlko I(QP1`[ {jfxvgtӡkOPy93tB8wwCM&s}yԗQON2 3qVXX!"HΐQj!|N#D#^q: Ujr1'H@(EdF/јBϬ2rHk%6{;GXWqBH>netR}ZwYOQi1ӄ_gc@LG e&" ZES<(iZiNp#΃ N1!"zQ>bBeׁ+S2AXiI`=Ny \waPd%SA73k/)+96A&JX;\YMh#vI#'|tRߚ_;_#bn? Gԃ@Q۞]OଽDPE'\L{~jaDA*\ʐLFiF+,1%SASt.cߦ{kinpwTZFK/or::j~d ~39Ҫ{4)r.9_F. 0-?7}n!6U"Pj`QQjdٯ ZPī-+&|$) hE̎,()8t5 ICD:׊+I5IGjKl8Xtl\7z½UI-SB%Bimf.Zen͟@QRbE\ olgTLf\Eh!ʖSZ#g}Ź6ݣ;xDՂ,CEƅ%fqvxb/]|94_z3Fq"0lەYAHtIfJ'Dts$'UAo V;gRO:@3樍uVɌeh1>ղFC~tDeXOY4@˜&ips礂 @ 2ĝPGaO&{MN fx{#n 0L][Nz,6]uj8)i7I\M'SUWE&%^j|1jrػߧFOW?PfI|s?4X䱘=7Δ87EyÓF1.p ެswm40n6 KW= 5;^?+JP_ p-::Iѧ:a-tH{x[RN* h2v!pL挜2jPbJJqD2uͿ59Zoľ.¸1{W_f. )\(nD>9+9W= *^Փ̐WnYYD+!n2Ig~pu8 3wvo$'Iw`IQ&N.9d?'ҰNfL2 sQ豫ME}FȲwr`Rx} >-] c_U K6~Fi W/mOܘX3S%Ѷ *%7zzF^ڐ'0k 6m,|ҒO ȏA2ѢgPƷZ#g&f3Gc#FĢr7[ I2S;rkւl,mq#ŀ'@GEaI\Oh`Uōt+< ӿ^v1<$2O ̓A"X`P`g%F`"HE(0 X. 4r&dnx7\0 "]֥Uld}q]? v癠(v}LF4 nAQ4Ω zI@NuœAǪdd[y{LpW.wppOph^hM Ct"EYQD23CE4S(5Z7_ǵĄmj^6^=Mcd>J: ]4 {t^1&:p-G]ZwpfW M{XGۤ'XxCÒĴrZ:f✥VMbpk AgzB~!%;kPyf{MMWGYI+deS3Fg Q4魶w[ as]ijICSOm ZpwN >}f붐p6g5(غA7G1 bJ즏ulњ[lC+EeUA|)&%3u1{=gls;^9T*{UXu : B}L\e9!1<BpDrP!ENǎHǺ\u.S1]7JzB1Q69I ~*1K;g)-c1wv j>x M~+Pi.i H6VLaVs9g^(d)<#: ,#D;2 \RRrZ7gJe+INMB-NLq)ьGa'VpKHb (}iLxgU*k^-VgM>RL`&EGfL*U ĕ~YJg}z<iݞWa#l 76n#6´&g<|W9hTHI實D4F}Ix|=᜜֯B ]Ƕ7F^ي㧒HU޾,18'ؐ$"!Y(D@=OWQ$'R. MO &!Yϐ}II8*(%4R-c1qv(Ub3c_[h Bƒ¹Dަ9Xomw|oTn0}'b3\0)5('hA%Pʤ& K RQͲ!]$p2sq&W:(4r;blbF$)Dߞ8- ab.6;ڶն" >)O8O04)&"mq&A9`:50I]د@Ȑ dсGPP"3A!px#q9v g=I}r!fx4"rR"rY"n,|MsKQQPZÈ0<.ZԅHfXB %eh(">}-a %EP9&)-XvƃFDjMȤi*B-\"gEtQ{Uc,6K]]d(.hBHI଎FPVI0C $)%:xx,g`Fjkx ~nE ~|!G˕e~|%K)4@G+o+8&ss^ͱ#qsSnV謥 %Hm59"ԪRsU u:zTl(bq*LЫ,tkLA! Cm j .ϣ>.:9}RNq-PVfzGr 3 y!Z*Y+ \^&)V:{`Ə&C&Gf.Qd19:oe \Pu1h捒(o <-1Rrҗt2c69f!錧z@M*QϸD='$@r)+x*8_Uv‡^̀,Û*dT^ 3r{n*à%` S}ո UF9~]L^ 0".P}&%RUēT F-U0j+X| +9: ׈%Ky :4x![(Ʉ,Ք;)`Y@-(jyUϹz_]5dV7EwsNq $<㴥!a40MLHBPC̨=eK-̅&YUbE* *>RD3LIPG<՞J!p.:^LJWG+Tqf1ʓ+~,11uYU%}5Ppԟ^n5h~*ÙfϊEO}\Gͮxo .lGGZq۹jNN꯿~,bKc{.:EdLeJqHy&[H5%A xx%9|3djR#amGa^m*NafrC{gꘋ-,HdEt|3H rcIUbU"dL4^y@lvvȒb/O\CC9& խ }ay9wCi ƒFBMG-~ ZPQ`"s'Cua\O&-jco0z'4zqk;>ܬ\gpMtW}U׃y4ƿ6aC|B-07j xZ.!Ki^R)>Y`N}WL,^>K+yÔG]D/ {%MJ8* Uٽ%L}wBVNYWG ;RnvG|o8b"h._|ve2W3W)JOJַ<{947uwV>C 3G&1qII ' 8)2);$Di%x7p|D~-J#mX9o8? ҏs{Lǯb8eZ%ApRH"y$bʙajb\$Ny(o޾ݝnj5M/^6%DoVێ߼h\3u:ܕ<! #iEc*c5VDO,13?hwslm |6p-A* Tq9Д M Dí#Z<.M!ԋ1A=Kctns%?y //T;}{@gτRmo%E`TRbsºYX<Du3 gH`!W.qZQ6 :Iz!`*CT6Ҋl =c,s>|~wDj"jہ-rX/QΔ^R<!H xUL ZBkn;OT+ZY0@ c(iмph(WȘ+$1DbT) ' > zd8_r3,wHl@ a"&rN퀀1e5D3TX =GŴ~cS5ʺ -p`hPOE]opV]nw! F<+pqx-yq*su4Bhr+}AU:x¹#8B859QߧzsWfr!7DZ]Flƹ*go^PQJGg|cg?n޼\p. JY闋/#t7 uwKՑ놗`_Q.~Zv4/gn0z4r[Kr4vۄ<u-v1޼.:|nq>g߼~4aK7oR_ikPͯ3ϫC{eX6Oy~Pͷ-^bioy\pK+|sl4R.SS2V&{1)%h]۳Âыk$2սGB^MլrS9@uጳuw sFg7lQ;X͛!~p;.p[:g,uY_3M!Hz'pkioe9v{i6nnPpEi mu\ya^wq$los'lq*/y{}~BR k?yZx"ćGZRENX'?y=.ևi&Wn-#ةdTMEסExtkfՁEh wm#Y+v0M̠;hL2_4UX۲dw<{u1[(&+dTs9_cc_0nj?NH7x@{~g24aλŪEcw,jӆQ*#,' /5#ooFЉ^~#Pȋ8 xpsӂATϥ\ZB>"D! 
E$-E\^ώT{PƽP6;]]`լ$NeAKOh^|>xRFq%EB[#l TFZ۾0RVw'V)lBrd qY2;# m+oFԞU 01۪fTuSJcy8bM䪄`*=8 r( 7&qQ= mHϕfD}D-w;_/G㲠l|e-xM]zΡTgH8I TA7aR2X2\!.Ҿt\!ѽ\uP6 IFײT hm+ \uG -MFW\!-m+ʕ4蔬+<7'+t*r R>vERFYf+<B\S+e4!ыEZ3-!2gqU2v5+{\ +\!PrUZ)8OHXd qm2v3rȕ޳9x*F./CNSkK2K6q ǵJfd.Dᅴ|vL`T0r'r&={7[gR*pd-qq\h\gh_eFa>(ඟk\#:v ,pQGHE Olf,:jJ#jl+-s2Ip .`NF˄GU䗭{Y+p] qL82BN q!YimzJz=ZФzesEO>IH\!UrF.WHY/W+)xc厕+$#WqVSvR^#WJbR]0XW #m+dDrAҒR.+f$ qYVDJ[W]+#}t\!JǺB\uV]:brRқAn Z$BǮ#WvϪDMU=W8Xi! /GҥeeVȕ!WëSe1e%S> |(ԐsF8Нv/Sn++xRXWl2=4/4Z5+~_}U +9c} jKJzhE=9T:187  \!.M&̴]R^:(W\`"+,B\ER+նrm[իȕЌ[\P4B\mS+}}受+dʕ\!Tj\!m+%i ^^EЄJiCI:6}RC=ZP;RZUJmJ+6kRvBJއڻ(WXBrZ'#Wk]9^]uR1KRN%WD*r>vwbdgWЈ߾eĀ!C6+ܵ~Vuz.{YT&~jC2* σM!!Z*-R:b^As ?cppjNM&1k)IDHٶ^*'c2\!pӧµhm\%BmBr&#WD*r/m/W"z\ .%OIؘd p2vBJnz\In Oi3/rw;)WJαgv&k4-.c|>,ō_ŋ4,.ފLo&]Y~_ KW.<gB]F".6:O8`p#ڹ&>eĆvNg|p_Ћ )64}/~UncQ!Jت*/e* MCeec6.2tѿbؕpׇwwx2pA}>%~@8XöDy}r=8W 23^;:: Ry$cqж yj{,l:(lZ*>gZ\\41p9$ʳ`)Nn9GWO;EBYv#I-/uqTCQɂC.(σwyn`uAy+rg2W:g>L `zwvhݵIJjqѰW0<V b i Sa޲/#aBK~*ڌ|JLȐ/`棉n? _{,܂u9Vڢ+o(+ql/88]\NKeisċk2f) GNF᭴!8y8ϧi~,7>_ƚCwKJcXPr(OKwѪ_G;-Wf~AefUFw:8$[j(!~kQPu>9X6*Q9y嵠./WvۊWҶȪxwǬʴؗ8ۭ{y__Z`*r+ixwYm^LWձ /iXgt|XZ f*C#>tsn-nx|<kg}'Y{f];!D[L..9jrw֘oddzG Rg,?VdL4*Q,3֭ulzrK|kt+)n6cӏNx̓ ?uud>tb'訿/F7Տ\Krko>x&փh<. |to2W9<4mj9U2uNVTg4{;g1a O޷axv1m3/0gLxe <`l0j~t۠,f ww|{Cm`{b.,w!r2}NZ-8JFTo(Y`Fؘⶺm8"V/X-QCl~eI'҂<3j1 ~><Ҧ1CNi+O0^>@ro/ʺ.+ؗq8EM -1.JL qgEmr{'-v|b/r;! J.zr6*Rz-1R r6*]pҐ-t&j=5ѪK΃pBpJFbP8qd8_q׺\A'/ogm}y5~~?m++xk_?Si3S^m^?܅lŎjZϱf$Wq0^}+x{g:H pfKyUlwЛ2bq4Yg$iU_UiK4圈HrAicա0Aƚڝ680=ޞ[ܔk K}1C$8sܘ\va <v_/`lfVE(*?9IFJV 숀n>/-G۬?3jBhq>&6Ff-AhFk+G) I Q0582F2c¨ 8K˨IZ VaE$\1y6Xf{Ƥ>me-qF\VdÖ)^9OuR8 p".Adm2M"Pe[ [ٮQ":l-C!J) .&&|рe>nD>99^H 3mC66~1N{C"t:7,ˁϴr^Rar8 ,{;_#kx O)RދsO1 wxxOxyo>3A腯x2G MF >#!D$@:`ejh)`:)h2\87IxCwNad5EOu@%K4*Tb /=EYqɧ!Bb:vۮofSy*{uyڴY8/R: 82XR)ЊH& X ͗L3CHl7T$cd4Hq :᭶&H]t* d*ybK3KugY߁dnjqW)Och7~!4:bC#hNkRm J!r"Њ&,))(\VPVF$CɔNI,,:ȱ рT%(AT^Hes5ݥ7--9/[7; )(,X<!&ϭd2V4DNʜdYY9QJd|I3,hO}m]JN)}aC-֝O"-ӻCae>Rqrgťp]Ub;9V/ʟ<HtIC3rR IrqQ>u,"*#yHuHBO99Td:%nr.*TpJ#c#AW)FƮX cpXx*UfƇԜg!o&~Ӡz4|$'FlqB!DYNAXiHB)u)5mMb%!=0AM&<')0A%vDDD҅Xw#Ò j)ڦGnwI:iXP& <2!AZeCQRz娐R*ڳkvTB%G(<`]H"-.֝xX0ag+@]+ CZ(s ĸH9m!$A d@%q[hJvAQyvHg\p^E=*iYEK'2vD>l^KkFɮqb(+B[Ic8DAS=A: H+N AƄMD*'U$%S4"A+ wIM)+>:f<\MR( :D,r4B"#U<1C*&h@fPjR ₁l"*G3HS0u,֝1qk"vl 2XQo2ߊtwPi|Ԕkg_>k@ĩAxA9L9NBQ$@h-"αy}72۲vQӴwS!ZdKgZa6u^< |ݐlX=sm{=V ѝoTtp5gD܇+k{uNAyiNNJz*Z"WM D͌%T%K}UG=)u*==mf3v\ z+8O~ +L+?89'b($&* 3T.Ne=%Q <5)ީ7B &E9胯oQz-r2o//dz?ǫ'Swq< iG6ozI/0;4_MmHVhGoN?˛*\ljʍ̝?eԩ;]{SG{(g7O{>̿5~{8y4䀣oV78ȫwÈ ӡʎk_Z[9v\W'ć}4G_`8TESS3Wuj)9s{_L}X B=~sf"nrW&dF^3jAfq`Z~Oc]A:WPGRolQX1~8ǿ4m`8e:|{ɠ$_v6H{ӧ6 翟wj]N..TǕ'ч={|!rcm3VAH6N6[T+pqէ-$Bh~!I/T{.Fa=*:C ȗW_4gzVw[lj@0nJ* b *mx03N܅V-nQ+Zo:6϶\v|lp)wm.%"kPjޒ. 3y"5D75I,~h./SvbI$NXW=v]XN9i2M7O<٨w=!ACaҨxKRKF*C1R\סNhP-kuEEYmm N.\"B{`!Y9h<*1'` m R\hڃ^#Чj/.:Sxs兝Ǫ.SR)J9۬>9,&#/+dJx&_z«ߟOW;{C<|2+"w4óu>roo>RxvQL=˿RCЖS,N-}u$־:hP@2Q5" hk$s,mz;v}fMxiZ䰻}X&EǑOdH=zVIRP* F4,DZ(''23E7mZV,j_3;/m-R3-I׮MEl<3,T10LW^k[qP9Uꞔ*UҞPnsIUzvMipGd:/&; J_[Kxݫ|)ږ;iܻżn8;rVVŖhQ1doqg&"byVx51uQɽ^Dh{ LFi0\:ͯ]X$ޭIgg!+J !(e ؊n963dDu/SJ_jz%vSkN2gSNr R.4rh̃p]e{1CK񠷳VG6=)BK:VK4(!1TR6;R{a.Y.k?(5*Ip1N;Y6nvEgfܵZǺ{Lp |o݆=#Fp"\>w*!$,.T@}U "/FBeٹVD܊ޛ ož\F4c)d8{n7:k֔z]ІH8' 2<}2 xBfL" 5|mv77u\44.OR5x=6>w'uy77>OkL*;Q+[/Ve,k+OVftˣur3#R>au' RZF8Cq%+r#JYP|t'rӘD "W0L9#j+%v-zW]i^fۢW)/-:=qtX׻eG%mPs|6M&~`7VlFUκ5*B6fp@_LEHi)AT`Tw[=Xiyέ'^X.sGtl[jQe(Q E9kJkX)n[㩘/~NUKiȩSQ6Ep6sFkCPj 6I"gCVjNopbQy@zcK`򵍋咏2\>~UM}?@?X-%g"Wߚ_b ^`~4?}{J8_Qj,~-\? 
%JR"fzO"{u@RR=?0:ٛy;#jVd;J$(DɊ E0׈Ŧr?,UCʼn8#elOYekMQiFGըQ77p6J O7~U_iEi-\^Kx nm'r=xlG`?;r# yڢYƢGt* d*6eJ{6n)iZs]-O^RAk,Bm`M|#O2j6Ec}&U]H;ӉBq Ȍ:U'ҋD4.5`Q4Z+%n{c*%wXaOzvV?3\O\'-=S{IiĖ=?ys((2TfYs0x{MLrq  90@_ ~׃|qhd1 C)AmaO[F*F}MZ|8ZTD>q o+!eB5qcYPv%Gq<<vZ:K#yq30ڟ܁g |*KC%J[%3UW׃tʯSZ2 |v?y+){FG&,N;e6`iH?67*R7T)QǞ7V KpKiw]*m*^ \)k͹+ت݉Tqt|WUZtW,=zp--X wWU\cw~8\U)"D2Nq  \UqqgK+;UJ WVc4X՗9?j5j7gG}g+ /^>6M}>}t%9FAH:A:'=C}+7 YϘRۊzۅ=Uן2iEVSW{r2߼BthcQ)*+_(E:dD'D!o'SȔ}46e!s%}T`8:LW@J1.>OwׯU~OCׯ~so?x$!9;P,Mv:|+/)4%4Čh^4}CIZH.9y!DЉC_JrIx V%\w.,|߽ 5Q[*>U{/?>6~m& me\=mk>k:.WLrs#@jڿjڿj'.ł{bݝY RڿjڿjڿQLi$}>tطMۦm}mˠ>ZmxhIXg ˚bPhb $a?%zH7#oDK ,yL6H,%!g 6r:tACPZp4 bKTr4",x ٣ʒy,{[}1CL;y|lGo'Aq7G/K!v{ vMg׍ЅnK8f9=5?߻zK/ΥరhB*L5J+  QLZp$iJfy(#f2ѣ #`^:`UD;Ĺ݂&yng;k_}5>ETW3yV8~eO Ə8.Swf3c 棍vQ7$W+{e:JmC8ҽC)hr rNgC@ Ṯhbr(R6.g5P`@I9 )Hj%"UIʓ ĹCy#do7]FirK{twp4tmFz'ѓ Eb]ĭ)Wm]5e dxThn^d\S.?L5@6|WO;E#j$BȺ\\VDIP,13X(leLƀ]A3|hѕJV*ZT1qPc%R Tؙ8#cw\3,L36B G•DǕiaNkNfqj~4~4| g!) bXȴGVH "hLZ~QrImsCU4Cu|B6u`ur@RMsayn]s;b8s1;ӎMQ:FmQg7jM閭}z5>m$Yyza!ݯ:wY;E.g9˴EwR=ujn[WC()4˜lỵM8pٕM8~A61  '>ܛ3Y$ E bi1L'dȃN]ቭtB@-*h,2&8I\*a5:4]Q#j >e*n[wEĖP[ :aFhNPƦ#k"g{dM)T``~Y~7UlŢZ^Ub~Lʿ ݠMT.E txHx88^]`9R"+^4Lp 5!Aj*+,ZT>Y48{(+b);K$XX1 0XRJ7 Pv16^MԖi=y zw)4+HXG"\J0i-aQ`,Qʃ Go:1rv,VwS)\ I@:O"IQTM)+YJQ*p MymjUZ#2@uXE0Ԛ)#EDM7{,|+ʽ*_f:m.!186D]ACSqJZR^8s&;~][p3sH'`r}ߜ]c@a$cexp j/6$Q3-9˜R`jcEyo~<ljPL{0Wnr8+Rw5Quλ|}tcԖ55ZJ;Hu$HʼnǀTa}4J`RGQJmƌ6>fimw1GݹW=]Xsu(CY[Qeׁ &O tՕm-}0QN<]Oe[!rFAng `hf)1H+h;U +dg!J@Ar@& .x9@ Tc 'T!Tf+c"{&CDd0E4^JJd 1%0li81r L65^:Ϊ;f9 _>v2”|DȰU;X^Z0X-,G{XŘEH)$J1iya};skzOՇlٜAC6Vf .7wξ9} %EOl"ɧRt6Q+Yۋ&*5EgRMc˷{RͦMu18Io2ɐV4Rs%hUR%*YavP)O4sT2TL Cc`#Kx#PPTs-yty>k8aRP*ʑH>H=ܐSjb FрGT{Z7l7FΎk\MdفAn6dzj)h'}7Pw}*<{Ok, QhoSS6#F`DKJgN"D"\.N0t9R`I ,Lj.e\)k߿z& [$Um'w;E-0b{iʽ g' mY[.zʕ+UpH\tnzO3g4]|=$1>!N9#0H@POAa*!5aCPj H[.:ӈIpݸAZ+!ם%8N,9<#45`xɘK5eB6"Fa't>:o[Y5L32Xó&CCۈ(jڲVnv'!ԄO \=([0}b\)΍e"2@r1=GHFϰH%,7QdI3!17\ w|;yQz ggA5o^g_o'&k+E*KQ\ET?{`"oy$ٰ?8y?) Ao>.PW﫫Ee*g^*VOSy/,|^o'ٛBBq/^.Q@'+0h#?-oySZSV*E1 p)| S] B:{{8#y #S1/cIj1)Y3\m寵vI :ɂ5zXj"LW,yI+s"^u㖄}aRm^NgΛAOnD^w RX+x.U.qm51i  ڮ#wM}_}nٳ2!a(77|p9P.uL>ΪTIAJ9}y:t>l~:o3N#`~O:;0Y>ոv",y>`NlSwJ-a^:: \Vڇ/^(9_nh6r\8;L.]Oڧ‚6~Wb1]W> /σזƳ'~<{3Ya ~n^=UtJbsuոe _US-ֵ`cF~7.O?/DN~:$H5AVvͧ8$k%z,*d gP5|M$w?+\`}4PI>ld]:/*A@Y=x`m ikxD&_N=WG%p1ܗUgW !jL80 ӱQI0\wA[<0IeS[&1~7-gLkZ,+|͎=ԫ)v<É&\ᝫ_W=-H;ݬx)]FӗTQip)ŅPBEfR#E&iţAQg\NbJbPEb?9|xg 8xW{ũ䜦8O8dˢvGWz]<Ϧa"76|hzv쭄q^$>y> ˪67얛J$R ]%TR3F63 d `en܆ҀB (f` jb w`߂:ICh:vF&ݒ/疯S;5Yk`v]3ad 2yƸ `1OW>]!&QVjx=a3LM3^3DH{_j/ jϾ^7=﷦X{mqf&Fa3AkBUsIBb 9qD<C,n uLm= j904*6qӈG꼌Ҥ -О|#,6€llMD|ngo!κq:]G24ntUΆ5ِ9IDtƐ^g1z"<Ex{h!,.R(*[]ifW>""BOيHX'\JlB OǨU1kQH µ]gmax Y 1rvN%}#o~qYctULىwbygitjUϲ43rAqJo)0(8AhDlj̨g0 l|srjo U 7u80[QTxGDJNcRy+ J3K`T3s=fR%^W-K\\VI'.~\Igh%MzjH统eG{K*,jx"}pۆ&lvG|r<但fvwC~@8FyERrg:ƛ+bha8m6x?u-A>$huJLoJ+y6ljgj{2?$oI]6ڭ`qVNwK#iy B[o^26VM%̞t\Ж{vSF%։ZmVZa>:C;jq~gp%%ʑ9̾(Rv#n/ת4tklGgRhY -=>I:UD񩵂_[-mCk*9~7ݰE*h'IW_mxa/Ƅa:[V(f,:Ǘ%-pcO~+Vczץ}2Yɾ7-E<ϊ}> GQq޹nN7]:lSTuXk^ܗEhg9X_"5z%؃ccz2YE͸⪩);LR j)f>t/1xt71Mu.Z`<̑)’Cz,K潳>CŁQ_’H2S6To+Qѓ <T D;|uYk'LD`fV$%uQk*6ymn ~+28irT 9kr3Ĺ6M :?_$j1Vy5%M_zi϶n Fnw Бx_t{ꍙ 1Ut4RLQV+br|9QnVN97SDyשX+`5Z56_!ƛV#18{9IC kjjv6$*#JB̳g}Jy(Dګ1Er.ADd_T 5o&l Py- {ƢtILw5 Rd{t%:`Ly;2yի|cF_h^K%69r)596Fӕ<:R{=! 
%W7 OKPm_c=< CgOg^sϔ e?2G/9ohw/ԧUVyua7ÿތ6Ȼ7?_;B~||~?v94=M\ݷOJ{ѷiR+ ڙ"?[H&wYV;5q6w;쌹7oש.0O sKO{Ykӟ8ɛiy{mb>7Y|?4|Zwpptqxwq2F练,uo,ė]7qv#˧u7襏rv:|5J5kqW"-DJw=n)f@«>-kU^,&$9p\0<-wr7v'KS}EX6')V X;KS<=m=Ŭ<$Jxܗ9t,%ӳk)d0U:M ڛ9m’yV blj318^@ĉHR8fWc~ZXDՙIDZ%JL >fR*qwq`DTpVrjPuWI>fnzIu26Љ@k>Z8'nZ2, Iʃ55̬$޺3 \)J]?NzP2^)\s4f0η;eX2cY@Td 6ܞ[r`%D`9ۅ%Phj(f ,xBgt聭t, jԃO Cn!Jűؓ[BOnFA$eކcW%s[\x" b eNѪκ|`|)Rc@MFGO4YdFQ糆v!o{P"( FeQU"pА3JEVEUzW_{q`0id!?tLA^ma`P!tH?=J͛YX!\2w,ZYTtn_\?$2(5]o,'jѿ>.m=0_p7!rU[{%˿?h^\Q$&LLvx c\_ 5QkW G\ iAzBB +B +B +B +B +B +B +B +B +B +B +B +\!O+؜ Wyf pRsH WO.+B +B +B +B +B +B +B +B +B +B +B +B̜pe,@\Oi>zBWOb$oD +B +B +B +B +B +B +B +B +B +B +B +:$p8r\e8*kW -oLG \e)D)W2W\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!pW\!p.pN0r+p%5{yw槮(djnaIa+Mϙ'f<sŵTH=vsd4W"s ,xWY\~2UЃUᱢz: d@`Kɘ, b蓅d)gHzB(Aռ{Y K_8%\~\|ƍ ~xږUM9os%-h : b1O,1|vs~{[&u# s~\Ju7駙TJ=ſf6*<&)]'J $Rd$0MH$>i"༒|/55/P=뙑2F/_[k˫k[ɩpm. ~U_V;CYwh᩷4j5}{PWUq=(-nF-8[gP_L!L2 %+#:[qъmO,A8fOL䴳9W*żĤL K3ɸ|0Iaɲc@/YGgŷ^M n6:rŏ k~ļ1ZxHYwzvyv&Rf#0Kvdύtns녯vܞ1,r{:˴*&J9g~AiդʉF$Ivv:o} }w)xYEvw14m@7K^Rx=f5ۣZ` }* Ym Rrp .0,emMaak]wX //G{_=&L:kXmn$X[_?2ئڟ \ oLB@Y /Дx&K" $#˂TTZ-Pe ` `5qn+-yr?U1Ef~?z];`>'r;"{pIsa$ux>b/MYK8-O(ޖ@JJE`ȬGa] !rh0<_a.FJ &8E9gNG(ecd"DXZLVXk^*{W !;C u/Ȯo ![0ׅ֫ x.+7ůьߌoH;-S >gl Ϋտx]sjS>cB>埽ǷxWwٯ?`Л5~>80xU'#v{46 PtUX0|uu7|^nJOwTuor=>)\q^]'6^{Yu^ANj?"1}'43gчn?ξ_}E~9_;} '&ӟ)LƯpΛs*O>= ]VW?\BZ+y1~!7~_KT٢@ nUoi;̧4ޕs•ww?b*66ZƢKWKz 5tur;U|7ߙUy_``=nv LK];r;{Z~^ϳEWOmҩl7;Fl v?bWٲmJ™#fҋxR82t f25 ?˔Njyq`AxDu (}{P?2{ҙ GݛI[)xAu{QʠL[kcx&f\UFU֝v'y Y6Uphy&c80w< v+7p JsQ^YOӱVJtL!KS?L㏏k_~Zx ϔc{z*sF:t~il3>?s2B?NїD@<d0p*{%l!eke b&CD;D2jFG] {w@׷W£U 7npt+y vFλ۱ DmM3(IVJh+-Rv=X { -f+%D(ƶv;x`@`,Eab Gv0ns{l^o@ ^2 .g#uԄ瓿cJ2s:K5N$D2(nbM.Q/)CvӓڕB !gdT:2MгvJL@"d.?>mz]ӭE} Mu?uq*{ǩ])v`,uvQ_ƙƸB O.L a( sH객 X*iY 2N2ɿ;`uqN1j~m㓋bW/ۍbh\(Gu e"Ze-;DpWAEo3 =lIN\\L!ǟAԙ]JɡqB 0.\@=RMՄtaӼgd1 J x@(â+(.*ZFA!}{]j/s7,ƒ:G9n %6h}҂[UQJOuLy=?oYέ&b 담]be}NjL\?7:_KvuӶyӿ>80YM8V˥8Y1{-*s_mP lPJFwR=K%9IFY\kOtRQs짓f)9I,Im[gkS6:e^YT ZF[eyy):i1yԚ(c$r ;QZMQ[ OAhy5q6_ y ROÛ-n˳\klp96Y-ͲTAg^]2 FC2<ϕDxvԗn-A+ R:ЎpQ/a0_1a{6?Sy[r*1%/ІiԆ 'Jc hJ5#$eZyR(P^q.OmDiIHDʐ [aD hВR* @(?_jhKGY 1B{%0`Gj#?{W8?aWI|/mo A&q^vL~,v%V@Ӗ\$y {R%H1¾0+-)M]_Y ,!F'E 3CNPNݫ\Xg*WLm3 GNmǗkn^S鹖Ɵf~suWsOYݰFȧag.U%/'tyl3/=K@(8b49? Cpwչ4`eXލsw |mf ׷v:Mӷ ]N.Z^GrDaK(.7_Ť1gryL'U41?ź*_W|va9/|P-%InW1I7~:Yćɏjlj/55pRIBDm13UdR=+g:<߆|YR<wHk9x+6ü]\#^Ağ6CwV)\ȷ_jYh_VTًZΫpRW:6z!Tr&5bot Y%U_];lY\nSen?}#+ÿ0h~T|~}S Hכm S4oT5iw{ux7ќ/ )Q2XzdL$S)S6GGE+ӴCcT-iWxj޵dR.ې6y!W1OyP$t̏Az^͟#4|(Qz_~ N1j\;o2@i#64!ohRp7Qc.&±'R;Ԕ]4{QDQ*s!GWnsx!|7m lIkү7ng^Gtt"trgDZYClƇpEhÑ9|"1 b(sIF [!dpHFsXQ&H+_ј 돈#e8#V9 kIBVN8i5z|kG@J`z58\3 {%K,!UYlRJ_:z_XA*` jZis^/!pY7ڲ7X3Ugis)ͨf~ hlR~QL(B\ }w*-A-}b>[p0GmbY1If(AlӥHE'MNoV:ߺA ߽a;j"[!f=<;>rǶ~z'zDY`ï*YWi~iG#~|S?mƖ3ּq:H5"p$d#w^R 1}'* Ш C9J,`V&'rgCȩ@hCDr&BpE)}ґlMQ(0SC>Y"R"W$җI{2&&΁.{@K@5{ :I*ocC4 QzmRqO &cV"FS6@8Rlb65nd-eShDApN`F_nj֮/qgMIOoRbʘrVݱ ]׷ura#mrWF|r,_ShIR TY˚3 %|V&J)2&K2"x3Zt% tv'R# |NX xǘKZ4 ;=co*4X_=B}µD_*2>.Rw]x}qe?ONooc5(c( IX!2.`KJM'?AiUݦ.ZS\N 9h.1_v+y;O/q^.mCڽiDZ^z4zxE 2] *ZZ1&*]+P6A#E| <.pPGQgǾ&Ve2p̵م!D:{u&ҏ#=zD-F8zĽ-8^s$6u,cm!`DHŎe B)q6%NT~&2 *#G NB|ܞ=boPv:ҸMKg(G8}_|tJkuFd JQ00@I{ ΄(/Cݳ?TwpakV.}yP/l@'"k[R3ksdž;n5c*4 $c.Q' a9~4ǡU x, J]U%{"iuH!c682vhHc$٨Ii_Os<rD@Q^QnBHXP6'ҵdDRI1SZF,+׺3I !&?vP֗}iAYak'`n62gq5AUX9S0NN_f>"/2A%OGU\$C z{39$%N~d0r|,S`̥};6ݩ;]ցC? 
W=n[[y5?tgds῱Ԉmw!>g7y7e9Sӛi >N~}F8s>:o=ѼrW&ۙ;7zOjnuꡝjtG&jfX$ 63fNisDdzϴ'v]蛢~>Rn&A,n$р@ߐ5q =V|&g)7b1bE=!)>GZ2&19KÿyL:aW742$*ElAiL Q-%rTݗ8[><~7Ɗ] }U#a.NGL09y*Y1#^=^Z}?58lΉ`Qi15nBи, D zT$dIȎ w%)z;N;m"fGETͅQJp*'ՎIrFFF~A pdϸ7q>2%27q,;vBRBHRʇCqf!蔕75HQ@k?]w@!Y-ŠuJ( ۋ8iJ|JEL 2N'!fĦKaPY]uubhYx*DG)]*rDX#r |1I:ȼj@Òhj?q61ذΕcBȐ6L6F9XdhcYu+Naku[VMIӤIpt:kԛ/>_t~_}l$aok6 ?HE/WqU{H2'(ቃXնB KMp^i8,ZksT\W[; *GkD mEõ YbHKtrARk**{CC 1MQ,xF,%_Bۛm_۞sjL٠~(1??Ʋ1z[K& RpX؊S:LFMB엗I*\)-狲L&zTB`TX)e՞Dwy݂o+ >}}1EDŽW"K6_Oi(0~q,3~qUJco_ղ6p_'uW_'-]}Va+ w:[ēo@6rq _VCDŽ*d?0qVjfWlj' !?PSi I jE+jۧD0ۍ\qݨuo'ߍJ88'ғ0½Lh 5'!f6Ƈd`"Wo8q[IKSGr/ZEe. IV;, }"$*A"|QD}3i_3x:VGJ 祑mcN>; i& 掀w$ SQF1Wt#bɓF! X,zL bR(:U@fG*,BE YM1PȩAQDΠ% EIo+wݕc(=$HYK~;g M)*"<}k L51ກ~8ko*06رIBgj8q KҔ.̹ӏA|yv5L~صwg +8lw,8/O5czkG8#]BQSS/5IJ܇N#u}8n,Yƃ E1>LR;T6Ҕ;]Ǻ ǭi,h{ewGnHe lxJBQNymJHÖTKQ  JMA26Be,i[%S4}vy&_qoْ8 bƌZ2@mDy%n^lCس>wYZ;)",8v#0-(2 B{EI'6ԝ T.R*D^2QXf嵣ZC{j%b<(2&gpTE>jWgڠqX#*p53Ok`G/5s6_J3f4zC&? kat5?yf%~`i `~6Sjv59_4缵X}>Y MSs ԮÜ%\q6{҇ASlӺ;pw/+|<@}|>j5rg6a-|Y{,mOuPU@ӉggG5F|,Sշ[CD\ Џ8B~h"s~nL>;i5JM/C !޴ќIdw݄l".i-uCȿ#G} EsRs4 PXN1;WR,c1#c:I22r4={mhvVz~?FȳbrTY-k2ūP㮥fr%|p#ڗ]b>w}ɱ$tǖa;$}Ml%I0<`(Kbh:Y @٘ҐGl }m-c(_2);zFqB`-ï*w*+{=JZ;_ ݺ6,nx q1n.sM&4SpFQ!.9_P0$c_ghU -JvvFz4}<*+$T>~d;̃w||7~EM66Pn0TdMj'ϼP 3oth>y %yL. XbPꜵ$XU46 FLQh(0L#f!. Hlk>=g@|:tF3QM n|Uǖ|r3mAķn'%8yuN5e+FZXXrBP:Lh J328(J% iK\)燲L6zTB`TXJg*Q"i ވ9mNWgŧh=G޳|#Rk-E_>4whʹ/4FisEn@4MI6xڕ~ePORcs(9r/Zw(44wmN.0& +"PdEFS(HS{!gErNHR!4&KUDJE꒤/'uyC>"lNv-ݕq؆L+w1PdDk$1pDvI!H 18' UdHP(f[PV֟R*ʘM`!ajD#C 0v]:+r|yaXo0vqrh ۝ɞeOvQv{xv펱e˒U{Bk%rq;tڛoT+?Qs$)*f&LbB)ePdj  jFPR*!ZlnZ*V=9bD PP}',x`%RZ +5cg<#3ҙ.3Յر.^.\QT6\fp95ӯӸ5?]ߎwU(kU7ՒA->pBJe!dL_J.ɠNl?{ܶe ʟCCUcv\q\qe2#dHJ%E"`af"@sQoU42 3 j-$& "3$^Fґ0h.DnV|x v8K&fSXٱ/kY[w)ح3l1`Gu^y0N ,P;G@ HCʇ i~XȠ 1+ k<$ǐ7"c}% Mks7F}|HFk~Ɉˈu1֒`IR"τdLm@4 B@A$ &u #D e($^2XobbIr,i$53r ̈?} -$W:k%"qNj/n.`$9WF 9+Q(B$ŎMfǾ|HkCUV/;O[nEx\x$G )##nNcn9Sp:\P c93̛'55/h9a*eb).pϥ*x*) 9Z85Fn(XΩ7&*Wd"~/x6#"xMS6lO1hk(f `a5z0:C({WJd[^,jy8ZNWdfT㏡L=M-zz5:>80^89s(>wk}6zP&l1c>nK-;/LDsj21}P]6 uH9EI'|42ɨyQDwK,[SH FoHR(|Ad$P6://%SLM: tger.1u&{Y%܀C}=!qJn֝S֎]g/@n6-#̚]^/,Ab69.ƟZeLtfAg͢ChCinz^4*=/>R fnP_y9:*s~bKޒi>5gß55mnMQ\vqG9+a# ,x y(KY͙<}_Ǒ[l)Ю(gH\ NrɐÝDKuwv#eFEEfy3Z1˃ NJX͹)) s!48Lj7`QGCO!neqïjU"bK(3ø)aWuUuM oz9 Z_o>يe%"=)ǖρw2U71^=^QhWodaT} 8cDr&Lj/p #U_jx%߽T%#E!D[5*GA$$8G#AC,xXQ No?p|ݹxЈ,(:/SA%; .a|8 ӫQsR4'tKF^}QzBoqmH7:ѥ vkP|Ϝ\ɫ4֟`9Oǣ~^rk>E5S`__}1f_Tߊz7]&٧cv5(j:eʹpfw7-bCZbk U cME@u@Hm Aភ6O@>z$V\\{sgb04 `u{7"]+V^Kچ|_Z6 O5/|B-)AmPRkzTǒT.cWCJNJc@N`+)&CY/ҩɋg۫'\_aqp<.ޤ($ |E,pUݺZ5]`ۖ[ |&o-v^8 nu.?*~QqYY͛"t56+#Mx0_fY|61 VH7 p.JO2eH *Yߟޏ".<45IQksfȵ1'Hk#:̾i~U{QxQ}u6 #.oaxÐb;Zy(k?ͯak[!rFsSRbr叫Cһ˽AiP5Y;f6FK("b"="X Uu*ptud =!"_L.DpCLD n,,xGI4 GjvJ N ^IeV,!A,|#t5^VP?<4HsK f@9-,G{ŘEH)$)54!gVkXSwZlPc$diܬf#7LjUjJ)于cS883M39J`FtM\kg̳ϕuҪ{PidVʤpXP14&1C6JC((Mrs-ytHM]24E*Y8p>=]N`>v#GSP<{n2k{/MR"Zcisb2j7©f 6#F`DKJgĎi.WӸ#@ j?ÍNPXH*VA 2wZpPH ]hk1\s96= (yqyn}ow@xnF#V?8G%mY[.z+W&$S칑k,Bki #zg8~H=|uC3SJz |TK:" A9* m蘧Irݸ~7|; EGf֯zb  rTP̥L2!BKPI0H:ݪm/ֻEaJ kvWQpHb.PEu[v={7YJA֤ 񴀫F/B_9+%p8`.G6H$è>j{N{"d)l%ۋREdQ?yTOsIV V~{_-ɫGͿ}3J-S^?^>yxkW>j _MsX}eןf}e/k߯aMGæ_&pyo>T(f<~nx8a\ϥ;&gf<'EogŢuW? 
xgKSt.%e?^_^}c Ǟ̧VL3x%0tuxE:_/̻`yO0Ko81F}_ zo/VfgT\E\Eio9rLEw Ԉ2 ?GC_>{Wդ;5gcU\@9Fps|=3+nT ,[cV4La 7=P)oR~뾸˾e%^ (4CM`xce~E(,O=f+Q+Үǃ'@<ͮ`u+G<)S'4eΧ\vGW#ZH/@FQmnYGa9Ԫ: eXjԅ&ؓK@0R.NHq)?{8X1 g ?)&mg Ȣ35I&h"X1bhci*FuCck{ZOKD-ĻP0Aqh,15F@!A^\"d3`_pf(CrNưKBz"V6& \A%\p qb&.ا6Ruy꾂k]O<{۩v=*(U *2J4Uə#$(\E$- Dš,Utx,SPbu)e7Vk}9^Y׵ڹ[<4Q[$f~.Y?/b^+k)%k"QKySĨ6xΨjgcgOϠS;5z!fW$ s$FKi` R4-AZA^4rZDWUKZCW -GMR ]H5tJJhl:] &]=Ab`up9mUBEGWOҌ*5tJJhn:]%;zt1Mt!Q*=􋗇UB):uЕыtu?G7>Zr$J*EWt%: zb!r>[u㩦r^ ϲ7 rIAr28=C4dl<c6 kհϯ~5 #jEt&f5ش&rKS6ŮQCq墫 Vf+R`wtŸrӎ5óTKgp2HWD>AU2]1-L^WLiZ6+6:v=i+r"J+9 QY{EWL;'Ljrh)b`'l&2-uA?JPf+bNC.pȦ3ȴZL]WLioGWc;>ł1%&N2yǔ4 j#oߒvJGezѷ ;~cmЏM,np= tHWFW;hAN]WLiբJ9D/3;c"+BM]WL9-.]DW 9EW te]y+v u^)"Sl!<׺\tŴncWDَ]GWƣ(w*pbJ]PW(NҷNWFWky2ȴ8y]1[Ʈ+':ꊀqsӺɏ]%ejB[{R+*>~K>lַgXSe=JSY?n5e/k7?\_!*9m\]6|-]3360{m/vTVZ˳fxջfuj]o6I~zoN.$)d>ݜ}^$"B6@Oj᫿mkbSo\Ɛ5+5f3^^]lٰ.W:ת$?m0BoP~(?]+lfVc=spsn\[>[Aoj[Vsi=X{' ?'ۻTQ Xt|Ӭk?{Ǚب׹Gݾvw\= 71u~`dC@ÓJj)b{JH^S<\'}>xEM'iLϔN ;l͖}2֔?kp}\y*A!յ, AAbDr]4vC>{܄ɋ;JBOW[ʐ +UIW)gIBYL_w\d.t1GP_r+r_7yY7vwUHNsnl,fmާ%]u݊~1z_+E(BU-~}d )FŞxw~ⴤ|Wʇ׫ҝ?_v/.g{Rs<]|^w,qM}16tYqfpBQO6h&x' pO'\zlM7ᖻ/1lvߋKZ6mqxWv|?x|Ѳ(>Vu|@Ӫuyg!|2_iGCOxv:oVc[ ·4A=ӷ[/`C띕|PVG AVnʇgV{;}Ǟ?;9Ӟ~G~׍<üh#lx{<Ӟzh-x=+5!&1O0:e.ڛcoF{6֣Xں{J[B*9 gH͇yWd/ }`wO։)2Щ8~(F>'ȝ0zwEw]m]tuP[6/%Ut+^V_1Qy|G-R@eQXv"P)U"ajFZ.BMu,$ 8xF@=[)#DQ@JlKN=n`ʉ,qijKǝ̧ø sӾuŔ{D4F*o2koႰi+Kt5G]3^O1p(]1I(]1HQZ+cЉ+vh'bZSSjHW^ e]nbJPf+F عltEZ\tŴj3uS&']DWky2ȴ/}zUSZaOWeA.zxGk|~fxPIʋ}"V(\seUZiwMtk?S¸\#MIG&6%]ɥ'whKd+>HPb\rѕDc=GjRN(o3kiAN]WLi]QWځ"#]10ltŸ+ővI:||tkhF"`\6b\\tŴ(+tKt5G]ŒtE^B6b\MtŴ8+j ue=U銀gq!v5޺j>rBȬ `>Aµ+E3 D_t5C]y^tC.9y]1%,vt::h/`YWpѺtzb)R.z,LqI׊E'S苟*/#jyw{}]|_kW贈Ľڦf>RU|(ּ{0ѪG݋ݪT]H>Q~=c>{-{4+ ZCo< oYm)) v5EO?~g_AKEL_#r>(cKW*h^LV'iȨJiKχٖIʚں m%z9o]_{$Eak#ng)tWAɠZ1 :J/k=$WH܀hy942A(;cU~K %HDIY(CQнMe0eTUԪtaP6T~9t2[ϦOgaUC"V1ZDS+/}D^y@mMPRH4 Q{i !i1P+)E*%]Ɗ K$%#z uޔKeR:Z_]}j_]kcX*m6BJUm!KKLB&II{jT4USB3cT)ƬMe=.+Uױ$TP|]!-gj T\>OqOeDڗJYFvK05j BTT!TP,k**2% u4*Ckuk lM%pᜐԙT Ix3.H# J*67J'TєҺI־Z^%}ue I Zݛ{$JHe VPTU J #w$W{Y%d\ ~'҇ &/%՝ C\ .P:Ru+zZ +lY薂IV"j@/j$`WjmIR2"ы{a=I+JWv "'+CP6 ^igejWRP A1K,R-6ĽBʀAsV Y+(;i^!I]b)ko{2LD@_*E4XQh>XgV*=$,`IA0 ((b kQ;ush[ӯjN#gxQ$  EyTW-m䬤/s$J,/j1Զj}uPpEY -}m##gAzB@p tSZXԆ"EѪP5ݽcH"("JPUU`sIᐰ5yXjЉ* "2Pޑ٨ۊ npq ut%H?$,Ɋ|䜤ѭ> dK4TeZᢎՎX9 t3 +ytz؈^')ٻ6,WcmڪC@0d@>,&_ IGd;=դ(lGlNID6ow::U^j>`" T ˕3Vf A[|`ۤB@ t,:\"B(,to3hcBaKRTQ6% L+*VT r VQ܀'J [e S8f %pn7XU>Y,#KQ @+do y"RDN^R"q߃.PD mw`=/=kW(PH<_U0VrRDāb`h#"HSM[2,pZxZ댉H@g:c% ID}y2mĬ*$b |,5jq{S`Z>L|ХrUN6م6k-+^o]@jГx% o Y=X2QWтҥ`ruARn҃ÍdHgP j_j`!#yPr  2QӲB JP aZUm4, `)حL&P܊`m@mG7 UgeSU_}(~YQj2JRH b|OޘT;: ں!EDkC4 w`܆{҂^ s  QM@wAKTFeL(۽!X|*#m6(Ze 5m n9HFgP!t9 V53bhdEr[< 4ki"mP<2Ggm]Π 2,6PM @)E+YnP\pYT2^N$Jlƚv7`lۋmؑ:<8M`QI Pf)4ͫU庽zˬ +ԿAn%LzH/q5$oUBeڋ!K֭!hLE;Hc<[-tZN(<M.e)+5K xMW-d0h&xѹq0Mÿ ,E2V]hZ\ Gɪimچr< x=J3| g:%<%*`rHz( Z)5DvJTFw  $0 DUZ;`7zM &iB?z=_+@1\ƒr\(\1{Bv%A^gj[V8]}ٲi5[Le$'5ǻS7%"#feOe59 )jan dMxB"_puLN #BnK'"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r=['fX?"'P۵F8 X@m{"Ja @ ў@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DNgB26&'lcWrFp ᪽c#1tcvW-Hp|ʁ(5"BrD쪅;vբE=3+rDp_ѣE~mDimcӫxa/ƮNWYkwf,<>/9K<>I@&9~ ٻR%λ:ek޽/~__U6 Cvyj |3Y64&|yH߼߾tɹЩ$DYn틇'jg kLS&^nk3ѬǓv^֏ F 5aXc&'ɫfrXhZq b򘻗\[soύ~zpO|phO$s%76*6=m CZv4pբ Qf\=C4&vにxE](=s+i܈ [GW-\'Wm= nZ\=Ch;&1VaaW-\aWR>Ei #\ l[7BUVC+D&zpebLp棁+5ϛAۖ?vD)5s+DBU WfEZV\=C+'FWXk?jшOdhӮ8ͻzJ߱K&OW =W=_ƮԻ+}WnzpŬug9Ii߽o㸱< j.<MIo;uYu17nf,Lm3V'gA=smC6d WS.*ڲJD[ܣZY+Cқk..=튰wɎ^bޟ?wYḦ^>pV8 {ou:y5;_|6;ƿ$7yqӎgua鿒&Пַ`߽}/Ώ+.?՛IѲ$pTBJwpA(4z~.?g:9=.d9=+?˶7] Ïhnu[zs]iԾ>ls\.ߝiqf ֥F9dT$ps̸N (E g⿊יcȖ|, 0iߝ=l۩d-0'<s`mLd5(E1f&JB 
QLhk{]Gm*k#^vb6үzn`$Iȶw(YZYbm3jI5`LeP1?¹]YtCPH?~!){0e?fx`^"߯ΏAك;=@;:_|<ڳ>J^~mN= J$#ٺ3}5;tfΔY2fC%D"ry  O ,ăC:Dl.X˂D`eeř:*\N{㘩4?ȸ?Mj|=o Ӕ'op[U-.~;#U§>>ý*:Czb~t]oe5p|0rٗV? yzyW6\ r8YݺOU,f?fl2hyO=u/ϮV7.OPLf}.nvvSs+$t߾9~ﯦoAt咴RMaԸ]Uvy~i9%%'qBYy ,C_5_?-NXѽZMqLǫrduUdPoޯtIT`,:||;@1v4lN~$hxPX?p\n,.8P}ql{=ΰ *wwTevwowUث){s t(&) Jow]E`ލ|+&󫇄O~3jf˷ ǔ:dA >夜#Ty*4/S\SyM<A呗({V(+{\s;]t6cUG^ 7a8Q2PpLx63潡DYٛ=Z VSR@>%"A E9g8M,pNFJpaP"o R 4^+B/SOE)oXC{kfh'4ZɎVn@YwPqaeҖ+ytu o+d+5.ǪW컝e&6\_=/+Be%j\yڍY8 PfK ??Ҵ흲!8 ` O*Q n'wxsZAOH9\Ej.0 pG >K(@rk*ʭ?"y򒶫Mbt`%oV:ː1vPGsOFT)bGfF=+!2 2s'|[:hPg_U=ݫz*7^w}!U1UZTq\έl‘ ([@"}GciXvRUI wYidc\gi׭{"4I׺dw*bwW>ŶAh8HKU2IC21Q}\q {V6x$yq :2v*55]+ * 0()h_ٯ1{dW'͂t~6u3xϨʰ` JISv3DDu/u _꠵V=Z8vh8fMI: 6xtsk4ob,E g1DA@Uv"YvN`ް=*i!{js2q*D2@KA%eCK VYyBZ)3`+ ةhL"ۍl;Dw{eUB|6/޹"'0J@v.1+a`EMzJ`q,ѣ5-qN(qoX<?GZ5  4q01A'-u=> VjXjÛox['?]B%]&Y8q\?'+9nn^}n~O>wm86pxQ*K$dV{lk[i^Rq5u' z2ZryIWkބcm[~mF,aTai0EJ blDwlroT6: 74H WI'\hE& ,X'ո?PI{s;NF?`8߸e.CL<ȭ*bLU6b˹"X " -ZD)c˟["N 8BO7x_8`ׂJagr{$*H%cB39!JZIG ~<2C-2a6aXXg$n]g>^b _˻ws #% ,zv@K 4r|hT9. 6GT"ܲ}bv?Sb/"xSlp.gP+FP!TJ'EAÙM̘^ R.-Ss6$jLqcQ E%<Bp,`U ډVD8T8QS:3kBt:QTDre^x\4\?AZ[c (%Edи ,u8KFhI5cHДK.B̂`npGt7$aXrF"`X"Ҭ %M4 _+/W9(P/*tݢHܢVgz12}\ (z~TBW PPh"b}:nfW׵n?%Bp7axI *$k3Z8 ).yFۯA KH!_p™"1T=< $[[z msJJĈkhy:%* *7F+߬/F5A.}ewg ޓ9ˆ1^\b2bxW7m5;|^qt}]n†ZQÆBuTi$.V.Ђ&,) =;oUw۷y|[=h-)#APD E\)pJ`19c|d0T"R-b1J&ijb (w[ %Q"Ф69k5IX-{Ev cw+9}=^ҧ5UNAȰJԂ@ iYe`ZQr"j'%[] eDHG4ςO!Z橏I˝^4P)Eo;wϨbVlYPcЧ7i.3F.Ĕ<ߎvoir^uq+zͱ{f\ɶ-ތVI:PfVkTHD:b|!)Ʉ-U';|QjBHΑl EOkB2N BE% Tے/*(PY[G{_gd|}I xUyS_(h48\b90!( KXo$2sh*$EhCVlζ#_Ы;lF`6\p{6A0ckm )9_Ր(ے9"6;{7U=UJSZ16uBg!: L4!"|C2Er1sFc6VjW ]o;2рut>s[I)#$ֵ HqPƺ6tt^ ,C E&P8{h؛(9}boLΗy88|q1"j|X5Qzo`W$0x3(Nϵ*{5<#wk7( jTяxs?!>7U<@:?N+"r,v=>d˓Uf>ɟ_{\?һ;\so~(EKi?Vf/^*6b7]ѼXJjRWu)y6wă!l_x:>l9H>7Cׇ <ՃM-lPw[I5aMv~X}4`V'?[6v0zq\.0+WuW\G) o/Ng4gϯ7R/lmjFhOyw wcߴVMg;lZAG ]>Ψ2^3ir~Ge#"J|gB284>Em+@#QК=>*11:-GVM^ut^Qt]LƂ)Q)'Ib®|pw7 W_g!΄mRϳ\>dO9=tiŋ"lP]Nop~q}jђ Ylz4Me8Rq F?o~wwtOFW7_g[-.fk:_!rIgdW*K ٚ0I?_>:oI֍D`9$AĿoR-[CKfZ`S,r߽Kힵ1Ǧ#yWw.{\zpXdSf g?~y4A֎>܂-4ue˫0t8? =nVG?|z/昇W8ߞwtdCGn^.~ڇn'g='_/=p#m}xomw;Eޙī]͟aGVPp347k%ϟvS\/aꋁ^ Xu68d:'J5ڦWC,ׄ݀QqW}k'ZrOr K 2J H2ߢΤAdeSyۡ1A;:%io-&zu T{ط #LtU '6$Ђ]޶Z4JhuȚC١69e;xdջˍ՞}U{Mj4S?.jLŇԻmSՉUV)T~[Y=`s^i2^Cl@tK1޵‚`yUUi+ ٌ* Z]o9fCVNlDRlvK{ۀm{1dE>FHȕN@"M%v_)dlŷF(}lz:hIU Kr}O.Ew}Rl{k;4_@y|F"ԧQ^s \ߧK6 ^B9m0:ڠ19~α"P-)QT3/>TY2ڈJ*M=(2J5FCۑG[`:@w#ز5  Y.KttAٞSTj~>i(<|br09;Z%h1{Fò'O=4Yg> ;t7 5Ħz:f+ Y3e|A.,Bk9Ûp.v4{eNz M dg(DTK (Mq 9RZĖ.J Mh}}d|ZR&XҤ8tn c>so+v~?S"+g9g cISH0(&c ݪǗyVTSXV^1x2{  \\W X-qE*WG+5EA"t1bJU2{Uj]#QeA"n+wbbV+VT[A +l-W,c)"Ve+ViD WuE.ǺbV+Vm"4j+\ޞ 2K[j\;88Q-Q*#pe+쿾:e{@xH^_SQhAǬ% wjqn2X^-@n-/|C%5e]<}{uvtX#  ֮7-WZDlJU)\Abϰt(\\_̪ "{\J%*W8> \`+L1b K/"-xpe$gY X-d>{UbuWփt%-`\r^ѳڧ^WRWG+<ʾ"ƔfNywZqE*oWs[Y;X#"()  v8,qIV*w\Jo\q2V&^{Wy SxӬ+p' dvmϘ]tWQƠs5$}Ǐ@uuMj@ؠ3q5 f[ӎX۴4F65jW}uBZSGӎ;ֺkE`F ĶAzVom=G>bFx&ItH2 :`rRl2w馺l+1F :ѿ;% E:SMؔJڽmNX'^dZ=oWg./ o~b9['o+oNCggWW&!L!4bx m##N?tɘ{^r 0;\bzUENLPWM([[ 3ȯ:N8O0J-(S)}^n"z%{+˝:8rD)+WZ8}A"`l1bΕ+RkX:F\:xQM1"F+V &w\JWqu24>Zb+52{UjUqu+$YbK;H-tճ)euI"Nbpr>k\J_qu>pE|y倸b@RkWRUgq sB;J. 
\Z2JK_]OV.{5801FE1q* W~|գ*@~M%C Ы)@#$Qf&pgOiKvo ,n`Wl'w%bvW[ Gh7(ϻa_\`Ծ\\S@!+W4fn`RXW,wꄅZ;X2Lj+]I"^{bS@S;U1*-@ճ鞡2pł)W$yS+V_b*W@ v弢ghJǖ+V[quZՇVy-tc+Ei㤫+Ι"`O빭pe5 :bPQ?yկBW>Db"֣W\X ]m9vA޽tu:t&`s]ը+EJQےIW ]xRq5k<m:zD4JĐH++\xO<* [ӥv,vpj h; B]G Z"--HeLiz2(EjKkWS^*Z'^^*Jofyy奵>"RR~5th_ S1L:ArV"j pŘЕ}mCi+v6^])VCW c+E95$]y9Ȋ ]R^BWL{\IW ]l\J.vWCW ׻Е{t(L:A+Ț\WCW WZ hJQ:tut%,ADWZ1Gtut"Ra=JJ\ ]m+E溫-NVƋ_]vDs;۳\\^jD#i[|[_x{]PO1ϗ ߮Ѵ;ۣ4}$}o?=!,C㟾o\.O߿Hy?m^qrss{u`?0B@y7ۦ,\/n?,|}'vi?n`>dOw7ڳט)7}kYqg;ī~RGGy" ;oeag5x]̔_̏@0?oЇw?|7|/y%lY2:q-{jP(!JGLRv?D!>] u$׽η |ݷ \mp9;2Ucw%7GSRζ(.p S61d SI|I/dK.LҝiDBRΙ r\J9;l#r}cH:uh'~ZhPBmc! L˜Dt:;l0b!ϽU93:.$dp8${jHJ7+b01whFKq(9| ذ]+.%rz[HXڊu!oLfL()#'߉s7f9r#bk*Jam D345$+Վ WKlBl1kW"̬{}m"ڥb(GtQƱu&fK `?_HY]ncH599j0H9(_+n@#>O$7eoJz(SAd*)S { >r=x\@:*E`[=LG*F Č@r#QP:澷|X)Dɔ,=%c##[$3e Ȩ&/<  jk}ɈT|["PZh4^DAbu:abCfM: 4pJ37g,Ā(2!EpО%C"XdGhϝP:K+43Y#e_hbJn x˱ < =J: `0OÅ]p}Yq ̥7X] Q 2BskB-K`1l ]˘ ݳ4ƺE?K+U^v]**Utؘ ȆR>sAEۊR6[ s r 8lk]C'7g\GZD*1+wOl$`6p(:\[dG2CYD&HP O@e#5H!6#,%']O J+9O;PcfH57]kiAe ¶oQ!: Ȅ%H` G`՚Jy^[Pb#t,,x$t0wq b`Lα֤ ̳p 1J@4`=N 2K p ZAQ{s2Y1PHq Ls$eEj ڳl";!JPA/u?BrAA*,RwQvTuAT"Fzm!c^FAsyKIF?5ΐ_.Ct@uQ"zJ(g 6C :b@$ k ҰMVzjE}ldHQB(,dPgJ>}% ^ܢC{ߋqiIzlpBrr|Z1fPTԣ*0t_l|~ 8`6meM~wk{U0\h3?z7(m3$00!{AyD *dDHC6*d`VPH@ q(`QBE򠷒!HDNkk+fʇX02X^(^$ JWld ԭHw 3hҗUSYߚ yMgێjʨzeA4D2X &>W!U;AdTNb f^b=b2"A]R%63ldL)B-pGQu $$6(Xt, mM1gT ]\ƨvl A z=bA I5UIՌ>#xX7Ga/p J%(J@2Xd9(I@̴d %Ԕ!P?Amj D,QTDmz<T&@g0H!(ƶDMqPkV;h,҃,&jdg Ԁʬf42#lS#k80)QC^"$-ҐQUr#al:)jUd*eTP2mMYڃ,2 `=f=-q >.`E^H"u)A:[H 9Ћ+aX 0ja\ZU u#/gu @JSlmqL+19]BR;5ATAd(PUvg#evT P)!~]#-lp<kP2ڰ^>@z7AR8*C=*mP zx5-k(-|j䋑Q+Bf{@S k|TD=-z5NQ!,uJi]0lAb~pQi 6zR ^.1RKEv,&fԤ> T$2urN V3f !KkbE9VX8D;n"  cR/]꧵vp6 :1&0I` IPt(Nc~C .T䌆Q?/pJgvgW9?[]奮K/|kҋ_6~ 0 zzi>߯nnq׷n_$_s}vK=«ߑ3^<"oB ߞ}//m[ޞm0~n 'AF6 fU4Mz% n)pT2uN Ħh:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:h:N (JX( ;  hQ5(f<@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ tN Р59878v'P4@'? MOi:޵5qٿNj 0xR;I%>dƕ#THV<5}I$RU6J׃_r%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr%'Pr $7t5]D=h3 \g8NP*.' 
$hk' _@ /'ҵ O t<*N@ȚJNJNJNJNJNJNJNJNJNJNJNJNJNJNJNJNJNJNJNJNp=]Z^v(?q)\^OO@}aRp D C%0ʺc\¡bwKVIƥ0.=.z]غXޛdÐ:x;.n~sȨ)q23Be \:پu 4?hixoål9ؾo{-V ̭{*|e{@M9\Z6#҃i4Ă@݊_|\d8BY?[EmͭA^6M)m,@Qu%R{o{Ѵ]\5=/\Jd-:LjκptvE7 ZۮMu/]+dg V, ҕPq!Bwם vB t ?:.~%q1 ]!\BWpvB%:ARWvT3tp ]Z%OsWHWO]BNu.睡+DZOW2}?I¡ܕ1}U "ZkNW128:8jn6b%B% }wqw7qCf2LuaGj#ŊIcW!M_gϭ)_wu^}ݕQh1d̎rQ;M{>{(=iIB.ΖqV74nq $60@Wg40;T֢W-{*Xֹ+{ 7)VzAcuRZf5G7e2GhJPGHx\n޻<M8};[2hoֶۣVU-mg^>?(!+}5;:v(IZs $] ;S>YvhUq!DW]+f+4{yhLB'AΠ*ψUʑZVLǷbt{.Cjh5w "J$D& RGr;RpKhw*BT;L3/Yܨȸ!DNd4qY(Ȋ๔feLT-U;/db>ϡFN8$LE(v| nb1zI%㔝fptk> ;vԍΏ; :ҚPUTm]7tUXM}MqKgK=vX=6ߖE'$1Z+Mfl!]?wN, VT9ɳZ\{KH?BY쿌~;QUVAK /d]rrg/\?C{><t_Wz\##}ѣH^ TR,Y $\c+?Ņ3cT rsW+ +(Fy ࣙQ/E#YqㅋXip\ UǽP3fUΗSjIU(YD}xdֻŁMjӳ՗p9+ljpe n"y?+PgQ\4/Eu: eI)v{ <;ک w[_0 '&}YY 3%.jHML5rieIszջT].{\Y> xUj(ܯg;)E-r6zp3WSF^ "H9E tQ0% SpQRl(*ӎۆLŔ}e S[\ Ul̻z\X|{*{S;'»û!K{ZȨg*Y ͅDqKa9\'pq#q9VN̮Xpnx(Z'=0:9ΈɃ rJ&򂮫uIIq%ݻ I3P=:9/n:2o@"yD:!*S` EqKt܏Q e޼7R0fyu'Oooyq3ǖca_%9 w˘FpCE)}D^- *B 'nӌnNMVM~Xz*5LL-si9cWUW=>sL^iѣB#>j\eXWP֕|*><ĔB@bKf*3RA؃*H_(fiʼn4i0fzz )huk*3r9+sFͲA*YչS :)Em 0TYثG=̽yYgּ"uYύv&y #yX-X_[/C?HRuR=/U jWW5v_WDXZ̺5br"y!HeNL &Z>3]zHGmEUL-L{y/tg*Iٌ7> #G0"Q撱f fBa1έ{՝.d =W: Hm0.hb*!;sj% s8۳uF)v4?"%J%!Bsӊi%WTP-}5RKÝ՚5_OZZéQqygc kAs;]QL+&Di§T1> I#ek 2h F`ғ/zZKqiw}+Rʵ볎[#UiZg2f4t{i3jYl*0e8R,P6PKR@ZfKF r*3ϥeϴ\taLg,5j Q: %%WKm$׳F%cL8.j AheV|I䌔VYq7e4J=wh`e;ܡ)JhfrQ;#pӄp_tpEg9խD&-;Es;DWX& \~6Q IP]v]+DkZoZ8UJYJ֎6k=VUG nRlJo@W:Eo e&0krwpUbK ~q3ff޽qOfkiؠ{5å 4ؾo{-Vor4ڮ[WlS@x{s Ksh-IJ5;pqV,~XK(p[IcemÓ],.{T]*qI+VG5үQ1B~ *V4: 2N,4n@wE7Zy,of(MuWʾCt+kXW :vE&:A ]euLwm+DT#]!`BNtt(Itut4!Jwg0p3wh5m+D67wW+ms] jØV+kmW r.NW1t]!]!\gBW0̦NW1dMtu:t]`hg ]!Z 2M$]AK4+:3DƴM+fћs]sak~q8L~6_EMNiX(hg v;;TG VR^*G>kq^JunRJ/t|;(6H׻'rT߰q͗݋\K J9e :s)%k(.aKvU F1)h (V ŇMBޢ;**c$2Y#⹗<7XJeA s=*1!QaTrft3-mXfp7Ck4OʶE74Lm[Jx 3A;CWWլt( Ktutŭus+wn͍**dϡݙ٭ʞlerQ$5vb[%;tSe,=DURu`7@+R+H2W#ĕ b+,%U HP{Ʃt!j!();W$7h.Bg+Rr28F\Y!( W$Wy."&dT<>F\9c+c+Z:HW#ĕ>xNC^x.\\'ԆǮP%q5B\;_@BJH6"ff H׃+~O]E 8v`\ũj(Õϸzm0dh=W-p./-Jv[^"DtN+{lyIvx +l;nqjM=n aq 0 lpEr+RU"Fe\WJ*8EW(XKW$W)."CsbxpP|pEr AdR듏HeqedH<8^qQTq5B\Y@sG/qxsZWVQe)b+|*H&[ 5J[xp#NZFk2Z|tE*S%Ep2 |In`33j-@"2= =M! x7FaЫz1\ũ]\'F\N*l9Z*\M6wBA=qY)oQ,{,S"h)m2"`Eg&ByRJi-A/n'0N%8H&#ZpHgrҙɍ0F \\ w{\J)3F+p >P|pEr+R;q*˸! 8%$ 6j岉HTq5B\TS@r+RkHϸ#ZJF"PlIL~Tjȸ! FB\\pjm5jNN$}ZgZ|tE*m#3qD(6:M5*u\J'2\FǛ-k{-\-1I*T\U*[2\Ս1ejrfD 2tBb8t4P7eelpBnJмD;2MKOgtquFhbqrҾh\}X~~>?O|>b:nޫm}eB}j:|ohTU]]>Te! |=2TOlz38EfqjABH?t[6nO2l4)08?PAlh7x bN`εKӤRɌbZzg+e+k+RT"!jRAd+l=\\o *0T*q5B\@#\`+:W6q*cvcSH fV8BHWRe\W6g9 {W$׳hkpP;mq5\y pj'O2r\pEjOWҊ ;S#4#\`WւMW22|E= Ekc亁q Ʃ B ɦw%x I\cBwn̆[igY|͖esS9eqޫT^TԧuMoHy@=i8>tQ&G֋K&`grdFIp0 vҰU HPzq*-d\WJb_% KlpEr+R;ɋq*W#ĕ*(WZj|H\pEjwyBqH k$OtjJ~Tȸ!UNZF".%&u\ʜ W:#\`+%\ܡFSk u\J19 ;W$O2jLWR;m`+|prЧuF5N+T=#j<pEb+$6$_w/WU[,1œgrYk߶v6=͵Nx3yE,ȖexPDNj[z|\9l|Z=l|JhTDFrFt+g^'pO`*<)#LӨVKӤRA1-=M1•%\"Zs:HW#ĕ§IqJQ Hn+R뒏He#j9Nzf0N~Wqj@{ELn RaqH'+\IkU P1 UڜWt=%K-)AxwoL'%֯q._-W'K [?^\ fz?n/v=a[˪NE3?i*q8y]VG_`$_Mn=/d~8,f?[3ݟbʽbl&3B'溙œ(>|\Uz+鶓`߾צeUw>tjWvxMYPnH)Tgܕ-tU.J2z;!~Fmg|Iǹ>Ô; 68gϽ// UژpLTu>l/{> [m[6=bWp\['^x5w] !ߡgˣYlnggyxOV+ߞo) rU}7w5i ꎶmT ށbGnxmDUYǓqAYZ9>@ڛjtN>tMcDаx)N^,NN;G^/!Ղ ~̫d3^e*k<[뢽N޿&nq[h jqz8[}iݏ}~lK\{}_8extώl3|Ӂu*rZFZ7u*̖Ŭv 02gE߶~;n c )Dh<-匇i`\K;Z,Wy:g; ~YVWNdQֻC?2"v+wnԶ.Ϗy~4YGdF'^o$_͊#ƛzk/v_g b3+Mڥr| TN va/97[0t3ot9[X;{ڰcw%M-7R~ˌ|f;ى>N9xF idggPDlVq| i t*r:klq#Ve;.*[حx.}O?뙿<`vz:dhc;`ƵˇCkswIxei)Y|jcWVzhss`M,`.iW-CMi͔54,r>ҧh6= DR*n8~fZ[` [:04Wi?XNW7ps^u>"pMg~Ou_AcPZ8Fdclt0)r57ֲJYU)n@Kޫ;de2pNK:,{K4;ɞl;5wqȀ{m5W{_rrOOŭ~tETU]]>Te! 
|Ped' wzj0l2';:%,(,,QƸN^Bԣm%c3aY'drLuO,ёC%Mz[g7M:]2=ҬӞk >,c#;.RP=Yn`L/f=}mL;f =r\jѐtz>tmc@5AnM#:c#{,lz.4q9z.:]35œu>ϛ|rz.:ä+dSm \OnɓlH?{W8 /{>,X`} %%NĖeɦLL,Uh̵TL owz98Go:=7u; d)sqdb 3y.ߊ.:c[M8I\LTHԌMb MӘCh3h+nJ#X2,!L?`;(gc"L !wyKb$#lkdE/E@ҲlK q$4X1ݢj(`)r]A6JQ7;C_/@ت67Mȃ@T`B1YLhP*4:BZ cTQ ,*H/o!xc _Տf>Z>zH. +7'a֮V'Ye8"cMdigJ%"!}*WK c.h%br'kɭ۲vZV@\k ByN*I)7ܸѰ,ozM"_ג^;PpyS~cYe<|__OQ_˳8TE|<>8+/@薷g2Bp|W>:/QN0ܲǒ~)v߲*Sr ԭa(al "K8%,\ [@Y#:9ezژ:z "L@q.089i#òNuNvȦDj#Wl2@3-7tp˃(4(.CH8+ֳإa RV476_'a_Ղ%M? 5IkXwZ^6^rҕWl];0sUfZƔY6NN0 0hY@{]~\zL\*ad<b5gD@ؿs(cHLyx2'yDuAp62nJjm8XFĦeAIKʿ\-ӷøgi_ڤ%]E6-,8k3c}bP=9'49W6Vi`>z|Dɦr3dqq,徦aűpABǫ 9;JuՙG0" _Y̓DQGBkYҨK[f)Ir(KѪYzg4j7?.If*Z`)uB=Wy6#_\#9m,Һ/y,E4xl"A0D+a&,vh/,- z2M-P~/̖^S+AyUZi~E-DymipO? b~rf oG  hE θJ$Bgy2ݰj:rf|9oHϜ,Ht>XO^6_fibɆ9 y`6>/hFMdBY(2 `*R119ؐzRb|,] i%J7_Pw|mޏ+"Cd8Ѓ6q=x$E!TK GΘQ!U^ *S!mZ2뽅=hR2Rc(5zO UYb4zSHWדU@Ǹ?%0U.iz SHc,=ŌP3PIh vR9>\.?vA?Ey 9i}h+RF'F,kp2uEy'Lvu1O9ޝ#!Bqz|y>櫩f <%'wϊu +Nse;9pt֕_ى$Ԁ<[-UE*֎qF/+yc/p=u]mO@ Z dVy]6G "4iX6w+xۣW6?Rp=79I Ct<ԒQͻκw"@a"@BL53SQ K,z.JV?~m*F0_?"?0Vc{cuןsEl7Ζ WEm2AW~eΧ3v>{0T:1hQ uqIH(`xARQ%u u-xSgMkU# o%.,%wjaf,V*A>Dm.C@@p++S}rԇ ҝC:Tҟ:ܵ]1IB#V SY.cl:;KYo@I:}Gʦ U͸bٛi?ާ=l :-\Qv/(;rq  @A;v`7D:/2XCuY;~яM)0B!F0w,JF<JoK $!T}mGxq`0yĪEhB.(NfQlv7`FX"iDhcզnw-rIOK3<.Fn6377;czT)-G=KTRu!LNss6hƳVNˆ9k)isX[s_"2G>:(XhUٺQ9zg1W-s)!56cE4PxL}į#jT{3.1 ʄYij9{{wų-xfzk3ޤeAc=1-}[Eɭ²5*, JiǞVfAy3'2tBcc!SeB"4R.o eZc} M EwZ3u1 `-n#' >cV Bb&Kc1;6JH +}h`Nnw| m I JN!c|5{y{zHVzM=S*%J\1LByG=vQ4'?LP6{s 4we(=L2F0Ұ 8W'6 2{Y:qCWb٧1J hz H@E n7)(_b͔ޡE]y@XKŊx  VVм~{ 3s6~煟k#|•o |0Q1ͯ[*\>)g8/G s˨dJ %?)paK[٪K/ ƄD;KBx򼳒39:V*ow*<FrF:? ޤÜM{a7 c2D ?ʷwSg.f=oQʣ1lABīt6<}*2$;N9_wYrj5s2U޵6rc"ew1`KMz}LQ$l%-vߗd$*%Y Hl(sx }D APS7ˉ8#v'O'D taZC48rɱeEp|w;<<&fk>f5|0N:Th]}"#i;1-wƓ&RчgG Yk)>;UȻ۴B y*ר\4hCr:r!;9H}cPwbYsR^Y8/ wdN,qg}9T:sT59'GCI 4h:.liZ vT蹂óMyʣCqiFJ(Р@Z2\:a;xڻ}n ;sQy=?(`uWCkA&Ag޻ ͮJ&:ukFQ=,vH(o`}` UqC G!h|448{y(j˾k8B<_G5[1;,;^wEKICbox*}bܷW[]"y!50`ZPrsxS9\YK˛(@k 8,QG6YU@`ۻEp7WT9M:ݵW']o0F1︔eR㯿Cr; 'R*Д9!$8sei}% mEJ}ͬh?͝3_ObAPM ZA!$<nI1gNƮ$%VQon#@htzw8Gê4?̨|s sOYOy2gT,JS Q##/^ aJRj3$㶯j#u@;nc cCY!Ê,c2_d(RvvJ滂ޟi,u|ci=u xmw m qD ^{EξR !$|Gk3ܨCz5 SWNiqr[QY_@;8T&UKXFDPkI&@*sM uwwkSl/ƭC͇oeD>k*^ Kcw0H륫 rS![ƵHi$RؠY.g+LPk7ee_ W a,LY+)1C'] W g-(V[aB5ڏ&n8F{~Qx0;ê{Q*U;76{WUώ5*&N>~ {KED6H76j3GFg {`?9@/ lTwDu(npA8v'zfmʙ"&sb8*e1w:^aNkVc"`\pLRl֗:Ocv6`Xk ә2#9j ^1rXc]au7}L$m8v1Y<ۍ&7ۖN.'3}ۛr6AYG=[z'[2ԇ=O_@C{>-EV$߬uFa#r !A Ȟw$19}lXcq:=#?&0HScEfU 2Ks֝=a}⠇voEޔsgsOVJ0=E"'}!s6#WsR#݄ .Kۃ"aW6MOBRog̽g+t7=Q '( R<^7Cs0S5/^0a sZEE)v3`TFQGi?("קjePRre}Dٔ>8Tj«5峅.`QwxgCs0ؗ5&wՙTgRw, ݇i95hے¹}bKIs~}SĘ[oq4$ȥ-Wa(9K}H2AG+$HȌ!X ܴktyԦ aB5CYN[(L7]Amʝ&_/c0)J) FVQZy~;ظTNh~?Xv<\4x&*j(&QcB{ nʨ;- >'ֆtH$O<~3s@PFs@<8hnOSߎ5*ӔR : +X~!ZnG"ᐫԦh?.1 K6Ȉ{QTH0}([p 532p*ٴEo 6k97ޚrj+4(ѶϨC0Tmag@(F+V &*$m;-}4eBHOgר]K\+&g[FZ`pn & `Up*Teگ`-WQ[%?sa%tFq1 #NHl%^=Xo~v&*`cԥ6ER.wp׆;t$Ca}Kĉ8(F;rSŔU53p3&6^4^XNS0;B ~Esd %2X߾w͆pzJt qTjv=ڞdqwm-~4ӱxnsH,y4yfqi7Kgphb&񎮄Ӆ\hwY g!q(سv~GrƅL8 E%KjnTLʗofe.i4*!,'pWJ\YleܿcQG0%'.v]-7j w7lQ0fHκG!ˉ$gbKe; "Ymb}tb"2hU9~72ceǻ|jXsGZQeQOD bC^nɭ942N$|΄P仛N+MR G(mGA4r3oix.Z4̇2cK4VNוA .TYUoa,ܙ^O Xo_d@Ih%>̙5H(Q(d H-`P9Yn[ZWW`;1OJր\#P' ;0Sp ɻb8;x ذF\Q(TV`~ـB(d~X;X_5벶p~=S6JR` F,e9y˧+U7o:%15guM7=2e'y;"r%Pw|پ8dnq2wsDG.}DG.ݨUcV,5,椴tzƶĔ@Q #kPPx{v"P$LSlI3f Lr¾ W}cߺ˔ \ \փuEJ cKK'IK!!)ZB(!Lʕ)sv7^,GaC$V OF=G/.}G/.}ˢ(!Щ(#-[AUؒW%PiAi&ZCe*mHIP,]=S$ۛq14dɳ̱K{r:~B?S2ضWフ<&,!i\*%y|"jAuhVd;ZtBo{b; #LagGCh8L^{J?"' ၦ%B jCpBr+N=$*j5B SPZSRB- ypE= smEnyˆ8ݴɂY7AeDm>L;6f&1l;8S#xpXk%vqa! 
'hjݏF`i7o"g "G` +Kqr|7pض7:vԓ8[LrrLt.=@ݱƙeXjc&wX}y E:K?Z?-hv׵|rkA|L>TNù{FGB^O?92e!*2_ş0K$_Qs1`gi<,~Bb:=;36<+cEngs3?Xrtk:Z-s /,m0qcp8|pܙ`yrZ=t^r%ع?Ι>#MHeI92Ie>"i2M$vTL~<P"jL,+Υ.EkE RaxAt}sП;%$IM. ԇAI ^uZHހEO]=}i(Jkp}5Vk&%`˲Kš#TaQ -{TEBl!ʷ%(ۮ,5^,-m9MX؇U2' [Y;+ b8(p2J*t JY C8g>㰷1s gooж)tJ.JO$R%{ we$4^90 S-5u4ɖzHJdb"aRgq #sAƊ ]l>|B,C/? GEE18|d5,B ]tw^P.y9=2׏Œ[AV  ea'i'up'.+IS&_ [9`ˁn(UGZ%W XkN.il"إG /W Vd5`9`=xtZ !,xP!KaɬZ)]3T~5,X`4Jyu_Qegvs}ÄBQ61`S- VE"-`3Zd:+FOB; ,G:͐N9O(:4ąmX͗AIb:)W iuӧ68\8c-FokpqTq DMstRؐ/6 ?'Dpmؘ!ӳSṢHFKN=C 2ޒlglZ2uRGmҵjN,;מ-TY3jC1N)MA'ēGw(iNK-7`8 f'ecC0$B28VP5ŶfRC쨭E Hso}KUbZD/u%@nG6EsGN[Ko(;i*,:;CImCvZdQ`"/t ǟ-JښzMq&j:!+F;^*6t6npd>:KDui35~nړd;Dp(Y=B9M0BN.j_`s7l'Er8*@ErՐQY|u٥DJi!|:{e|I+P5W;uҐ$!>fU1s{lZmPmFYBt(ǫo<]2_gmo}[~,-`&vEKͥ3TR/԰ F7ň#6w't*J 2ΣUkr0tn`m-M= 2)lQ!pU,8VP R9ܗR)i'n`|<` [N(KQp.|(9Ņ& @W+eM|Oaҍ-}X k I:- 㣩E_!ۧ 2BNS1y µfBG8~vh7hvQ~ aw\5!Oʨ$^c3 'PdH #OA7T+ՠ,d3DcX"iO)Ioʁ1^~̞U2UIH ,{3$Kn;7I8bk}3]w 2cB2NQY3ƀ)A\j{}.u$'kE#~&moOQ:`y|#\|ѨCٚ&f<)r 2F;GLBa[Qk45jH| O[4 g4fXbEv5U3x6/΋p6Xpv|ރRԠCQaN$jI1Z%kU3GJ!if7p'm]Г1C~^AyPXm-= q907&1ˏkg#'q@6YG%pX c/ќ{#B3-o%rCt_5@hEWcEF%NV-.1crsb8 oH&*Ht^MreuMVI[آ`?݂*I0Bt9!ֱ!@$xA=WۈЯ2V}2>C6R4UָwFl7}nQR\Ӳ~Ey E^'U._y4N-vXt;nHMd4(&A-o,5Ӻ0Mz\=,KJjMM%Nig5yo' ^"d>,`ޔO&VebiuhЛʤ84btsKKv198vZDqcrT9 D+稄3y,l$0"8 Y]|X sΤW˷4QhF76)H91)Ke>)}m8lU,(&PgT'i/9'p?{ī|?yapY%%|ciFaQ#ňg贠α׾hAހ7X=s"$h$4Bp`` vq:a.q24m.6ነ!ʔ_TƧJcf ͠H#eȜnS/$u}::A# >LnN "2\xAB\*: BZ? # ӱXnذ&*JY|yrmSpCN`0է?WdŐ7Ѝ-UHB/I {U.i*R4 3Qp&ǩJ)nH?xkL"iOwR5-umy(&Hks` V$b! 4DV$Jr},g>8vj= U9b}iRܮk J;zmO>oү7wOf_fY nr5#zU}HCp"t)#!Jރ+jآb Z\ d@ո@K٬fKxϾ-*j*Z$+c5SQb⨇*KqarZo>m`;`kA`lZLBck ŚwO\\ɿ<Pa˶mNSܚiy86Kh4Xm@D zHE%{un~[sHyo@K4Sc]~-m8hnFT5#Loos!icE9~D܍j$+<_Y~իUZo.9W̙t0._ S`g CB'P{V1~/:yQ )0U\ 9!Lq!7۾m IaMD4~=p4wIc޶ %42\WHtKUOn⽡mO{oʉcx$dNpY|;Dm_ ?*cdvPCKixՙyy:@ϓrq5-( B~5wtZQYs²zU+kb";Uea)XV!\c=_`)L~A9_Ɵ 4⢆VjFD˭=g'ע錞l+UiJʢxVDT&lMYIIӳjdUj>lj<.Nވ^ \˾;E~S!dY<KH/+mڀ kҨ|wD*v "+)ji,^|mf=aT@ uj>-3Y26 e NƃJ†ʴ$adM`Dq?7HM0MdEy%Z4vK6@j{6T 2DGuhu>Qћp[$,څ с}oYt58MJ%yvC kyXQ >:J>Ѥ.YqQS/)FDp(QF%@iPGBr;ɡ[qT_ .v1їC\G>>q/m.>I xA[T $C &Vjd<kpVE"5ȨiU MդDОUn1C)gMh R cܮͰ+ 5썷rdeu7 }9KTVFgE'>*#Xy.q v;, 3zT*}s򗄦/*}XGY$ʈK]MIo41nE J#f@*vr^B eݡ[4 (ﭦf|B _QU4)Q*&&m*yp3?i2+,ܽ{U'~6$*j-+}:=iŹQn̉dR=(#NFZQx랉s*l\(`oTJUΚ:Aо^%ع!$[_z A}ë!/J.(sZz9}rr\WU0U洨E#(fҊ($b"q$&Ĕ;iQUVWFs ʹB iO~(F[-fN:%N3dSy9HmwC]moH+9L[b8,`=`q7MҖ-[D;N߯%JHʶ|H"ґ&V2O.*e.h>T=rJd-V=:NqǛ6\評6r^; FI=ñ?tӳP99TWW} %Y! [hBXi.0\8[.ա}U ,AE [hnWP-BJA(@&+U ԤJmY0ŅxT Tk/'+)ގǴ 1Xa$q hՂ.ez T0&S6Bzi-aQkV󶼕C5mg\Y\F2^Tn ıF`"ejZ#mXGө `^F643Aiae /TW1q;ǰ+[6f/3)cS6,t1mOePF-]2_F@C]ʨ^H7wuo &jds7w[i\8:)1bڬY6IFDףm%u=aa_EQPUacB?ɡ&j)+LDqQ Dz4@HPBV>L<[_c X8FF"VrU^kU}{RgUP RµOED{ͦy>ǕK0Bjm-4y?afwtKw skO.񡜽BR FĂaI6r%PD]Nkc9~qNc@Ŧ!#DMӢ,CT/uTe O\)%[h\l?cdļW^A8[Oj tR1khZmא&߃0oW=[[CڣB8 ncc^kL>aXL؄+mR1IђS9sNcMnl<`{D(.nAB(n{D_;(Qe<H[7{i@tZ8o-P {7@rJ 8~tFw͸]q}#oYԟy1d8}] K4N:ꖭA-jEO DZLI1mXa*^$ʲGV8-qU*eGDBWNzeôǵJPkiSDlZ|? KeRk$oC+݈Z)z )IW`(S!54! WeGpx:4jOrݖbT&KȌLB\4MC#9o^:FE 35`s7j=.kZXT+,?*miZxAakTW?iܩ#Fx}At` Fq4Z nj9RU(*Cu;]5A1Hקn.AiFTT+7ͥT_vKXjfW fY?2LAp ;{hxQ0{y6Il gtz+[7or Ό|7Y6pdsU{>x #0ct??S׋,O/}\3jsr\HΙA=tlԐW:HdwIfs+se#wL\z&_Տ? wR`0⣇K1 .q K_ѐxZt]iՕ3+fa|Ǭwϊ? (R. p,W9ϯ^61>__ފy3SHS67dq26JDM,F|%8}s{ XU[2;=Jl$u7)fwO U-+%؁-r+L$s^,X'”ng'[U4MZ :RG6A~[ d`!CLrҲ'o%cL1j!hFW0fD eA&reÞѷw6h- [͹LsV0SGU1oz߸q1o%9{&_yrtD fQB LBZP{>t%OndӧևWUY U[o:t Fnu@ W nnxK}OK!XJ%2 |k&OuވJLoK6/fMm Sj!W>{9H'ҼHNmPrܥVCCC@fh֒qץútXMT̈=l\{0@؅S{G\D`&rI:{oq\4Yn?40P5pQ`dġ4)o&X4f5K yUGzUyyOvwP!"|*=xa[B*|?K,)cFqlarзorZ1 m청Eju#i!N;Ћ(Zj. 
R(4p?~H;Qk<7uPAh w1vRQHɭf( gbwkZdNX:6[1&[B7&.ĪEu$%nqY_z:mƓy|lL w1rX(a!#",{,BH8nqQC/: >3re4_%М×,_Z:Lj&BB\.>Og2bz b%tH5BJYܕ]ɮ콕mQ`Z&~(,5) ۹eߋ(>z$2SyL$#t*19VJ%]Qs`2*1JN\ը6X+BѾ:Ik\0YsJD{X9}Su/y-'ϯjt/&pEdF㋇a4x/En28DbHIS"e*P%h,Hjm7 `'om>;bfz ~ Gf^}·8>Vǽ$9o7k?I o)y=JIGC@؜P{z?p p$UX6V/nqdޕx_*gH@ʼtfӱ%UD8'š5.y桐<(N""!p#F/͕v\#iRKHs_g!YW![CB KL\!dqZ㙹ͤPD6 8<6^m\˕▰'f rԖҟ:*.0M*<פ4 J':EVauRaRc;88]i52#IYމr&9#;uyr&Q]睔*O)DP) ;uyLaf_n;ʆube5vOV4Ƭqa AOaӊ4ք*ɖw*dX:d(ͩO=a5T*J LMs.+ڂZ7SmG|dm$qaLc\!8_foO>tp?^'qn,q *M%1j ։t`1XkJ "CH+iʬT4r t$6ȨȞA`ߒLzf¬OzC9?t<RR:EYx6[_gxvƺ|O6g~>h3 ߮` kW2g.H4`aJrba(ԲDb%&"8ULxLPo>U^l %i$S`dJ"%IJG4#E B}buÀ'MiB/%S l:K@.yP.!,Y,hj;̄ Rp`wEIi, K X2Xs< Pf vtEOg3pYM b7TkC_j~h/3_$B/aIX |a(QyPmL(O~6n_y{$%xY6^S9sֿN0g찗qorwǐ +$(/kqs]{6$ !vpH$Dz$NjW=|hHH3%G86!]gg&vc藇c҃8W#$ 'rH$h|"14ߋbj :J;.$b<%15[vecg0Cр}t&Mm.ȐY.#C֥N/EJk9'*^}޺!)k3J;܌G]-.b #K'&栙54(c/Nl8gLdB-ENL:IcNseέn]Ld4P.[t"H+ P-5x't3l. Spu) \(aYcEAO3)?p\B4-KFXR.bBP$\선kwRn,lٴ-Y{"yq*a0J6})ƫjrm )#JJ(tvhgQR WZ*r_P64 q5*>1dj_ү:J`(PXĈG#\^nN͵= Ε"/o ~Atb]cW$9$ ZU[F3=|| {;.v˲odgtťQkJلS-KaɬZ)(uW>V) 0JuxoUѻX\Jjk"B4_#LRzmAiZP)w0ݘ]`;<0 JV i[<u2J?x_>̓Lz/Q-a%daNvD_r װ|Q@I h~}^v4j箑% 9j0d$qrlcp6rùjW죪\fi JR!4lH_qCjD _o(^5o(޿DnZbT)5:GhV9(~a'+eV G&\=yFeD(".4lz=O@~W0t|=Y#`ivp2_OVa~_́&|?9ܞc4wx퍟o hi'c\=F6ع}*Z#>]e2D_qQʀߝt\J{((*]t F^\X?%;+JCFVPzJ{TC= *ux= W39Ҟ梏*=t lwLy7A\.)t/6o ΙmC$Izjl`hbEą1{1mS%+jML '.Bh1M8OqKM<6KY)H!$S֒2Ӌ}LO;l")٫WK`^Mc^M)`, ^Y%߄Co d4o 4PMy(A ^Qq)|?N>dzs?|J1N&q DH'Ȓ(A50Vbmԋ$74W%l|ut W)7{Tl#VEFs59Uhkh(RH7ؕ{zfg0瓋=R ?#K)lJICԙ㸢`̴[CPeZ$zqYB#jjiaf?CQ\ ClXTe*`!ӳF0clAy50-HFsIv~*'IeSrp+(I%(x-als(9Ņ&i92ޣזۓP;6ԽkSڜPjj`ST"LWfoQ |Kfzt^tʕ5CC<+ QSỸyׯSZ6zR*:PPS&+qx4o?`3F!Q8IR`LaPҒpb&6AGDhݳ^AO_NdG΍ȦwhkI xpM:48 fϟDZhǡ?℈y{ gcfF6&wUʌ%.y55IGl((ĩиoOcYNZ #,"*&TcōEsc2^KihKO@۵z/roxq\sk4AƟaj մg}B[XW:"&@qVAB"Aj`P<2!nHU:hy3df ޔ՞(׽Ri*_FڐhƆ.@Y26|k)dMi@e̦I[ O h҆C9ШKĪ`>ҵڛϮrVx`k9}\0˨+NP9}M6muTúeQ=MlQߏrl(5dT9}Wg;<[FRiI? [:dSmjXP3n(C6ՐMU-$7=Isy2=ɡ^\`>e;ah aɴAn xPd] X8ѻa%0F"e.DMbM@> < Ŕ}gO>mND T8@ٯ=B9M0BRl cjrY.Fh99|!`7Ƴm Ʊ껿gu"/QF8 QmA.;rvWYꫀn&*=%Xi;=lzWiQ].,M͊ńjb]Y(BHM襸NR̈y{Nc=4՗-2WB"F}L9: ;iQ 0>]*Fs ʹNˇ G%v`_M*_\ܴs 9)JJ=ƕ@j5Łv/aT#)B:N&h83&>xU] ~ $)ŵ=&l ka @[1,@u U Jcm@A_3ZE^qp'ZOO-ݷ2&×b6$yb G lƆS ,R9a G_PNmw[Qv!0Td"\Umv+/,ak 9$aCx3@ʘhH(25M12 pMs[2رR*j)C\^1M4,׊b/X 8P^ZPBs [I=J2WTb$p/ԧSna3 ៗ(>:O &CHQkR9Z?̌2W.0t8 5Y5k#N`GVY48=b1lT䎇ųE) +T,SVS6+-/?=Zɉf.=V:>3?PZi#V8,^}M_Ǘp Ԧl'Tmu9.'Kb4+YOKx@Al丅3k0Q90-4TbycXysqq-5v[ǂOOtv]c$ut21q6e ?(@zu^\]FO '4: (ΫWPj?K%Bb!X g׿׿ 9ڈ^osM t ;VR?{ƭJo[c@U~>g&;K'@}+LI 5!9Q}6jm`R)b,ƐPQ A5U&XXRY[U qڅ ˒J,IiNx)K AX1dػeP"N%},6;*k,HE) hFsvY`Njyd6S Me?(x"ǃvSGw `ENC(BǚR;狟gwR嶍|w`BFOGJ |9)Qd򣃘 bz1zSYi?.JP-Ftgs݇yjJk])XL6t:sQMȔy %@{hR$b$@N )0tK;/=-,V_Q~ept'Mu[\ 겭 4zNs($OP^9el>~-d3F{6 6r*>sAO!T 8D<&QYt(TJoKJ.::c'&jBc#N9I4WUt Gy #\ .jƂ6Fժ˵gD{ MWUQ@+sQ>ZmD)yRBPz*sFȨH%loF+`ly%N"j7VTO>GPBUoz 2R/-kv^/ lR\+]r?*rki/o;_|v|tUUomŴG'9^C}e't.Dt̗<C rwA;U+Bk?F3a+՞VSȕ9!c-lcp'Ub'lS #t pTCu!cۼY٭oo\BV/\=ð(`ݬJ h$pH^oXA~na U;vlƺOWM TE:c;6X7L10pL7}|g4> Ui? /oUA54vd@P׋noh0-:K]cE]QV-C}wG1e6nY"Dd#<&Z;6֘V@SZ4?<`Z ~Iу{MNp##6N̍E͠M+zМwB#0Gs9Fv\tromܶ" ^:n[˖tco9k JҖiAW`i)cM#x n=@ubph&щ,Qye?6G4y!Ge ]Yd1[TrOAVFi;mT:uZq>q[+CHBO"!}O-삋^#&2b c'&хF6Bfn1S]3n9e<C*﹙vʹ7G#v(~U`r,ձh݇Iy+2\N;_7y1ם ^Y3] rO$C;{TjdpVpYm;eZWVaE)Y^v@j.MśjLjYw4eZq;EZ=Qp]d@Ӫ?ۃ-moPmQ mt(*]s.I1dtel0bt( Yso-{+p/X̻)OAdS0 Y ؘĊZSNd)vlYAc0ފ3]|_j8ۦY7dy+6va3zI9d KNX?PcRfQ*v9:=NxgŮx>ԇXY 9tGlJQ`E9>fKXm& O6Es8AbDz Oqr s?AM«9Fy%yٳ5yHa>RVMv[]g\~-(Ju?w֨ZSaZұԘmSj>hL1Q|3S?ϋ ~{wyheוzk[_OL?R,~XDf |txfLWdLQ]\|/Xpo3e z \=d'Y>֗[۞㛧~GD;̌Eg'9R#}vur YhYJw~9A@E@l;2c1M } -8ʚ/ YbO25+r>mG͸NeC F{לOth!u.mnMyܦd4hD9؂`fWHpWZ |jBD6Fib4Q(*sgm G5a궩rXnVX{f͕. 
nr0LJycGVqMiv#ȑ>k0L^%|*2MߊTİf <`;hcfR6uum~eg@r1uP6g(V?谗'//?l5@.S71Flfd7M^&*7ZFD7|1W\S2QvY\[㧵@n5VT@Hm~p6ko93`\4~M+THcgT7gs}R3kesn20rvf7\c[ Cԍ9<栍Q]$R!T&ӘCcf9sƢzÚČ`A=7yUkNFqI8^O8U/ޯvJ!&g~b.7<5zO~9K_'Ӓ?ojހOiocz򥻸<4l:{Ht2e{-^ewLH8tw>ICmQ'Bݪ9ȖѨ\*,۪^8u3-YnnmQIdپ\ylj?yQpQF>0ZY t1-|Dt}9)%LqD^>vXSG> 9` 1(vل!X X K 3YAgIC0 y$tCfG#w3G]FXE]9$ngoq?l`sz3;\ӟ?{׭/pHI ݻ@/ao1ƍhxόg:)Zh(GI$۝ _$ (e4$əq,ե&lNVq `1Yjm4"B/Q?YGS;Z&ӼpqD@PGm[Iro:]B<;|\sg=\}FmbpN|cOUp~|oZqφ*:jXtZduzփ:(XubdH,p6ۜ7pYLDdoR;IvjW>k:|' )NX'}eMWߘ"+x/'))[{BǕ`#X9(6.OGՅv($ ~<4-_éNW3=XQ8Pkڈw>0+?=QM0!\RAu].QΧ'xz6:%Aa-,b^ 6B`p5u =D"֦0YzEPPMr.-Kc_vjG(o]\1c۫ >o.߿SUtp܆ z'˓/N! p5|;,RE!EY7<%p%` H{BnC1ŌTƠ/wImcANBǜB`lj'J~G }8*hN)t[],ސGk"QnQ#)>ҳC-Yzl˝}9XjA!-Ո(2Cej2Px6Vڲ]d沠" T)BA֭p☚-;ljTr`>v)^y5Nz_qxo9IVsj^wkǬ7ןje?2-tzQ!ԸW]u'i$op9t!jˉ 9t9+j PIN|WT7+5Mk*ݡ.N:nRP ވ&'R`^ 愸_IȡSSJTrVAEq@.$,(|Lu:|RRq;BDWC}) N%4 ^-)N{$hB.Edi/$j8%{`3[- z>c}b@Ts'GmJcr6]ꉦi24~?"(ώZ[C9\o$YuUU9礴Bl"ma+N-$PޢN^$ά nV<k\Nݫ҃s-4 K&)M/vuvq>9˄]B =Y&rQ_85Fui4ᅄ}D'J@|"T?GťT$s鎕)>dzcm?Wf)Z.8fT̩uU|}uξW9&?_%t^y}yyK]BxvZ>u0N~~Yy_nmA7FTucMC_|}CbBT壯i\Nb Y:D%).!vW/S.^3Av$|-,F9@M ԙbay{T1GT6ֲ˞ȇ`c$Kqy6'w\$dbK½um%RY!c:ex_v{!I[fA /d !E3,qeь+f^^K~SQݮ vw-bW62;CX+u*;.ZS>累FQE/[Bd 7!$%T <*>m;;)LrfO~}tQ*)XUy. S `DJʯc6:A'G1[VwsrSzj*yG1t;((ahA=m$fbl UfN>:F[ S.|ݨ#VfDijFY4mbl$ɆOnRi'UdbL|ѧD{k kf_dFM rq xū-ZYPu{BU˞뷎F`k.g 9}ȏtm FQ&\0Ivh(,μ~vu-#HoA]l`Lw[0 wE9]x|>|Vgj1~w x61Dv> =Q{` ihv ӧ/;C ӳe\ywUWo~# 'N.OG.~m:E'eNO4o(OO~}}iL}keV3zf%/zI)uQ :@e'Q>;ӆrag~5zQذC^\öuݎ}ǥM}c-}lἦgiS<+[~2zz(J7 AֲzF&Ƒo`i"=Ͻ\|O{5o;1"yfޫ,goޫb{5:yfJ0*bK%{5w[N,k :⑵<"nsL s# [&E&."s$`n'PWN툽U( vb`1X;CfjLe޵ۀ9G`ǾL;S7}Ԏj70ς˻ Ϲe~ę>Sœ_{ {ίk8~ L]QYh2SH~+8ҺQI"=u߰b^V^ͩ*Zذ{(juJij. 4{ߴ᤬&RJXm-:$QB\iuˈlq{*KoErʾ<ɕŠ]yuͪ  5:s0Q!~s AfmHW/Q/{WUoOŬR1E .P. 㰞Lo{A:4ɏu[Yc-|Pb@=ZBf%\ub{]wǬ5|yӵݿ+ck? ^ g_V<~yTOe9?nrV.?0vc߷b3lnLsZV)㡵{t3`}mNoos8OlfZD>^X[v.ۻbC%HїZ=Ro/#_hz?Ģ8ilʟU$Nҏy9>z(}?^ _{ű}Jouu2=Vrr\э|8=X?E?ywrzdx[Z@1 ]ћ""㫤?OLka9"@/Ug!%dwrQYr=49Q')Z#im ݇) Tb3GR/vm^r@۷Fb|ke/wqvv^M&JԳȐg1)b M,G! =ؘkUcReԀMIF Up<6kSLځ đAnܽogmtlnª"۵O}fM^{q{-[y֞H'pQ렇Wڝ~v,{j؎@ uЫ=F˸`v׹Ttq8bQV\G\pjRf  FqO OwVb]o$7Wy;hC{npЧc=qۓնj PJ$EGx1c'2}>}rBIFom!5"ER:KXgպarE23K5$J[DllrKRXIɤh`%-7cK\\V )~~,&T^iJ9q΂X`bls"d9zT\)\穢.ےb[/5I R'4E9;5l!{p1338Llh%PPlCEco m9iX\ܬ!l͙b |0[d*DE謜EP2(!V/iP n+鳱TCEvnS\if~DŽ fc=˴!'߿4*U,RVQ(U@I+MEBMDddYSKFJaa׬F5F5UG15ZTjVe:Jt`4Vdmkj3 hV]/twj]e!,+kTdi"Z.(|V+TѠiXPgU)sdR3&).xds_S "6RoBk[Wu}˖,%K!%d@°Z/T<(FGeV6NRBN#6/8bĚnTήFYunrQNP EP E!4{(ܦ;{]MM؈It PњV aQ MeJ$?քZb,JR/W/!iqAfII%h =Oϸ?>C^O/R --YV*;5ȒV 1FӊT+a}(/97-պ ^7y,ױ eٞ84>y `$(+baQ6:aTԑcSlUT(Ŕ$[!j42T D]]Q^-(K<׶Օ)툳.ya"Y^S1>N0>"ܟ^%vm×65eũrS:6em1͒ߓw,x:-s/cHhdU.94W{hI}gmEXNK?/@#'_]AGRjW&_zWӹKFK)yhM/=w>SrVqIM]K~jei ya%c*%XD ňXUeo]y5JuCkwE]] AE?E><Ƣμ-h%p|z-떙2d/,j)lpۅv'*;Na>gPIrCʼ89duΎGx59^k +7`%>9;+rh 'D3unrGPX[ {^1j'*rPʿK;=&wPN.e/hZ*zWH~-7H𤻶`K[>Lv|A'vODdz~i1z )?0p~sV=IJ%=6 {lpXQh!sov,fN6=Ĝ gZrc 7ok"w݊责[kΝӈ$Msjԧ[Cŵ9#|_?8N>u"Fr+9;8o7˖.υ"pgӳrdo>~jxgOz5J8a@Av6^DmPW\~_۔3vVwznSK ]%<>~T wV7!Á~< ngΎYSNc؉bj8[Py-LcX23Nްeuۤڌ'DE8eѹg}mVɓ:{dC2FN-κ;;=4\ bњgvy/ Oa6}A,6^f(JxKZ!x+Q73Z9;|Ow˚]3_Ӎb޿vAC+fҥdA8 IfF{erȥ,@)1[&SwNiSd+]rf U0,|4$y6O0r#@`j4'>zpOX._zjh$#F|5EcRC$0jjŒ6:(Mֻ҅UV]pApTtn̞$NHߠnwQKΐWr|;h)p,xzSWڗa(S.ݥMl~@PT @}i Ғk"u/+SCC =MUjsJb[7a2Ǭ?/gؔJEH(I@YĚP2KZtL4N5Jq3$:-+QPZ:ms Ơk}m?o+\5Y lmخn+mO$;W^wd!,k]Y/lIEk(u"tVN"(Lhm! 
УI9Ag%bZ;N2Zޗ cT4^R-i<{V-wwZ ;z%>+^+^W0MA_8 {y C/G>+bֲ'nhCddW bѼxcܼxy5#x$n3bCp6}YW!cBɲFE5e~-|V(v=d  "8TY!UC Ũ 朑άO+m#IEv1{:?E8ozPd*egqjkkm^p) >k[V˯f*!Y֋V<:ưY\RLȞ2d Zy62/u{TWaTw'"13Si*'x}]r6#[m%O:2ځy*@V9c֧ׄ&k8oh.یx?q:$E~p2[0V4k,AhBcvdzdhH+$AOaCF>">ؑpb=V|M1!!_wur̍n-tb'q O~j[êV0듷c#Wml;ܚ7ӌ^;t23ƚlN١J1;v ; y a"`qap0&O '*bÄ T@Y;Ǡ:9u̒" 1d:'$`{Xη{Â=aIFɱ^-Ɉ$-&W/LlYk..]ِ̻X4%z;B^,ɶu0l!8A䓈;kK1z짶h+T$axI/ӇcvqO @MGg|fZ>{ wKB_$yޠ=)Ñ[[~vF$3IɂNfmb> svSE-4I!ҭ9>gYa950*' :h FlnԶw`Ǖś* WMv^gw鰓K ".]RJб~D"v %RM\|_{+Usn;}<{Jom~3=mn0>y㮗<8edd( =c=!5Dzw FUqȮ o*~\nwzɓ*Ȃ54ӒB_6h&j${fy`/+ޗ{wL!CmtT[fԿwgg7zagƓ~3~zJ5W܋TKy2̌gLTgצ oEF rg~ߊlu *`t-eAUJrd6DcOJt:R<{}_f/*^7zy ſXnv>3N囓<FbaOo.nj~>şE]gv~^S{QW&XB1oI ^tP !< /#-Q !f2L@ : !aڀo%OF&oDש5 FnVȕ e;y g:g,obL7PKvs1h}zH6%OLj:v:'ƘZnFtnDAxD ܛ`6Cռ2p6Zd `0o-{뎶&ׄ\&(` ~ovc&"ߞH oMmxZEoS}^Jwоo҈^/x2Dk&uiK5u- ׯ4n.Zm[h桄fox.*jhsKZhf3iL/jַauo \/y2>(\&0F I[֨r&j,yC恨̻wgYEg~>3>+~zGm|3oV}Nm1[rY>0]vo^z87$ŊgTՉ*Ы3b6 d o|}vHZ-SPFgiPtY-wv]-#R11AMyWNm'sI_P(Q7jgɲ8 #`4:b/"Es*%P2g 8rqJN㉪e~a6QDŽ!NKQˆEUr{ O_%Eϗ_1+Y* @]3k[4QcL.P lFf-QSL.9i]uT6>O1+T3+wz:'-qaPSP$D/ k9(iIAe/ 3 U+;] 5$)Sr!E-$0EZЃq=xՆL6d! !CƩђ:,̝Upʾ=5+1e䉤WV!2&0w*ezGk3Ejǫ\!@򘜸 62s5+0:IVfJJ,ٵ)LF"ƔFu5(SW("#1R 9dWNLtv< ;*jW,e<+DdE. /T@d:QEyw$PAp2 Uav#duE2CI&*ݎJeb3 )C (}G&>noNb =0]ѩ1,{ :#1-Lt>"t1JR`kfV=J:@(sK=?Pƨ<TcB-Śȸz>uT;,Фسapf )]vvի`]бȄ#H`4LSAF vfZF4l+2^a9zGjF\Pwؑ|vxs3fyw`zmۥLm%U le/ ? 0l1)#akjPb0[ ^SLƛu4Ѷ`Z fP wԀoi_~c|»'BYgޥݘ05D]s +$rbzIVlǕR"٘ Ї01+pqVu"|qlZ`Js aK6{fMﺏ)Cmmlg|o*~9Ӕ`4HFT#9`];xFvƎ@+;Zۙbqܦ,sM~{}~FaBKwO'_uAZ7Cy~ra>㏳ e9f; ,koI!SY'}(BT'Cv. Sj\F-΅ڹА~sJi]{?z_GBL@D 1Y ]O." ̲Ab81^qkn0f6:c)` y֑)k>€1T}\X:8R؁Ǯ9t0#t?&|VSRsUU'Oq.ycO ݊J H}LNh-$[>Zÿ9 Ma?@eŇ/v_ieFӜm8`nؿoSNJdAZdo;Z]ױ5^"Z-"{MkպL C-2R=Ow!43?3Đ5i{M@ponب 67޸cŬ+.`"_ڧ$BKBPE:r+nO/u0.펣|iWIDx5 <jE x8$i b7HJ5:1!|!+sL!#!Ǵ49>=,EK܋W qom3}q/lq.:GE\_?LfIja<#xgx~_>ui#W-dxybWo/O??d~: g_xث~qRjtK$#E<\>D٧q}]^O'1ɵ^2m1v[ 03]j6#zOpwZApO PuQA/4\]<; ޞ9h|w,?}V/ ܩe:-sU Xe/-4v'yĽ;Oo|gY=vRy:$~RwvxM*NhkWnC႙=\XzvU3_*.FИNbV'!]jWQwko&3on?PćM{zʺѭݟ$LF\Cd~GV7KƉ CJqv& =ʂT0BlfV_߄2̜7}ȽMio=^#0^|S浲kл_֓_E5$a-S`i0^ppw l܋T5jy(@;;ogk:Ewomp~= x&;Woo_6:dߒ7±O뇃wG /ceT:%E@:DP>-={O?Iw|.dIW3'#sjG_Hx14SK B6@k:0xF$t[{9BfK#Y;8xS},4Y/g#][Ƞײp% c:Z#̈cF9юB:@]/K3-*{`FhmjHE/LƤ(NƦd%9+1#a53qEh13-1fnh~H~3OW޼Iv{8d>}B!~g/_7l|Nx+1s~C4 \^UN&G/e17}X Tmd,w=U) e)cЛVh1PD73xā}e5^tx"v x"1ÁA6; -l(m/ZT2DQi[s RvO!{ʢmr~\TojB-oC%1rhפ4_$+ES V@:RgB.]c zC:-$D0Ɉ݋۝*3;s(|8+\ (ߡ' qA{,1ZϏc*ëM*QY^cs1Xk F%F*N.7Nvoh݃Ję|`Fd(3j7etF/GJfi)8ˡdr ;٭Cq)9cH7>6(d@MLsA+D&'e9+&J']d6研 1jd;&A/ XYaFZ(h(2 vEK^ %4NVdJjP9R d ry{WTD)^@Vh;`(B*Sgcuu"z|̠1%a~BFB%O Z,1lW >Vxq#A*w옢~\Tڙ,vd7"y9H5mWNV#LYWio -g^ W>^Vk5o7LjxS+"ҷZ ꎗ57%G?v/41ZcjZA`0{o8' fy 0mxMOԍ-irn lD*y[l")ߺ[8pxwF7+]|S5t{;Ueֵu?8if#pux͍ze 絶e7}lQvFvʌI^<`k#o-+m1[O cr|'$T TQZf\TN+1TW7Z?Nn`׈k ^b1Z+XJnĺQmaPDa΋Teev6Tc7485}v^f { Xvިc}6j7 stpR'kn5ݮS;'6{WVg rE{̝!=|:=fncgOʷ]s=u8n-P-RhE0KTAkM5h>c :L;( @CȂb\*B>+#X_ hhXJ}D1kW/>#DU'+)=U)"pVtX_OjDXS(r/ [0FbuPA$(E\ Vŕ +J~ks{1)CKH6:w%6zI8NQC_J*&dT^ 1 zolFB'E"D4ިl` 9BUEL(Y3$Y2\pue6񰙨ѫ Gk %*lנ-j֢f# y[8C-?E9+R-Ü55Tu<=v1m>1;+3׽^y5~pvA_͈u.NGOt韓ߞF单_ u`o, /PZl"]/ P /@ZVr;}U0f|WX<'\Nkv|CF[+arQyC52eC#kӮ47j "avN @>Q;Z%;-ۨgm'BD[cJ0ȘƮw]iZ#;c&ucC:ߓg?Xb洱yy~"ٝ3Q|3/L o)G)z{>m2ϟ jͽ#3ܛ}FrRLmǺUiJ|c4ѬX.!%}N /O-~/o|{N8zL< h`T^G p<m vjj|W;]jOl*6_+ !+-U .gݤO TTS v^AY8cR`Nv8 1&٨Q4m[(l8c,$M6%#Pu1Bʯ/R:v`B(8z|2y6r9S3Xw5%IygIa*#C[+oH7wehtHd;;H;مOS`!248ǮgvT *\/[/ֈdN`9xVA~ 1k #KfZxY֯Msf дF]4ɞ־vU,YZ'$؎Uڱ*݂>Lm2x'xsu-ShFm45m0O{g9Ѯy}p˦Y98-vy#<P)3)|[ 82x͆t{rupDž1Oi1uhx2UߝMNd9eVJr|Ч+m'~ .2^e81Z6f%V\܍s}F87Cv[BDFafBr+6|;^h|dqj' o#1r)䑼ǭdcv U-7f%N\@9,!jڶ"٭7Pll+NHP#;4+֠k${5j-NA?TZWP;O=oԮQ7qЭ=SRQ#=:GZfL5hݡ"/m %}L{5'_:&_0/'=vFBI#։gHEX^nMT7H 
|cj'NVE>U+3&BVހ@b#>}Hܺ:U^_؉6pdPq}v͍+Nxa:9[@W}cLv}]]ԛPK3yoq+jחzOg|p4\\{V+Vt3~퍸~>>'o&g%r|e.WTyw#4SN޵#ۿ"݋À?wnd2ܙ`XcYҨw ƌhݫToj^(E s -JRoHԀ(%8P/h 1QtBWk^c(@ kwOJ !DMRSS qD8BG(7og=8&0G`ce6OPLjw>S( \UrsA %,hr}Lp=eԇBFHϙ1D"!P#$b$8E[DYWS$(3B"-31kcHmLi`)?#\ݾa-S>I\./HĔ){\fSy[]1&^S8dYj'[fb.C8Mw)L5SHSVB)ׁ!ޡ!qP}ZP=X*%%A% X y{þI&=u0Ɵ‘Hm'Cq[u0}P;J)֬vfN0AKU VR2(#S;D*) 5+@sUv-m8(lݩP;Nav3P*sRj-xKZjWbՂgMcln9?i8̠~rE~&^Ќ]Vsnzl6 GL)f"=}ew>̈́mzᩰ8n0s 9t1 Ҟ,C[q3 Z[jwB.1^lQScʥ. Tr"Gj>̱ vǤv5LCvȆ ƌm3'>o [jWnȽZjws|NN@eQ;BQiBȃ5U]օW"_f%xg"f4 jR+/lK[eud0Pkn+xqˆF'>!1c;kHBQ ?&"!Z~>WH@LKu;[~2F)nW:?& B@ǚ<ҜwX>R-|Lޛ-nц]Z" x,-t x!&/ɷ#bNtcF=͇ݠp["i(ݲ 42M~X2?Ney^g3&hH&\G:_L5 0h .h\K^'hJgC=%vi) Kz_^fw( I=lCBEYP,D!P>[ۘҎ95ufZ7tx|hٴ-u m N Of8k=Т'7TɵcE-W3h)z\\~w7?|Eǚ[FR?u_&0zM:! З.b]j&X$}I)LB3l *cfuC"=. s /IbVi80k߯G7JU}C*>LZe{l' sp3s,x))rب _oԵwW xR u8eVi7WCoF!z1錧w ;n0=<ٵf~'޿aLu<`4N00l$QN9U9cƅH;w1eKpb&؎Rp~xN:O? 3=e1A%Lò"Ɍ1 Cx%<1 IG8F[e<&Qcw8rhf~d]Φ^// ĩ~^՝es?X ?Y>ˬu\0V\ +!B[λ[1bV l!HyQYxФ/jWg|4CK$6An^:$$v8$# Pt:.&5zE2.KT. Ed;=ti c.q_u ;:ELm૬gڢ2mؠ0ё)e>4.f%68Gj\A%CoW!FB&`zro?c? v6;3IMR&">QIQ{4C2/3܃x}TG{ X4DSc*,*X=ekEsf HHBbDDNš`_O[yv.4x!|m:FqtN7 Yő e KWM-zePIREY86m"Q=FUqeIӸ TǗ@ua+Oe݆k[3,UEUO1)vM(d^_)r+:n*ByD2 >I6 y׫7afB+J3n@kH"j%PGƙgh|\t[ -~Fv)JysAQjyNL#^q[9G=(LW# Nk dlj3G3D@G\iז T;8c9qtNBF+Ưx_qe&+o. 4R Djj=BN'6\ GW gWC(MPVOv9遇$z=f͸ŭ7d1|q[:Nr4 $v:@3qq瘥e {pjjݢZ9V ƋΗp8njBXV @wmc:`2q(;NdASq|ul@.*B.HO&?\ܿ$qHIt\On~Uf>(I×\(ZD!hE/ءHȋDhx8$:(\dMXR2jdsuEhIkNIS>[w|EXDMJPAq-LLL\iIdD&.Htl{v,5QIb7Yx <1>Tʼn"P)W;AGdEȞV5Ii--Z(႕UfŵPe% U6<+d7ѡ=hBC&ԪUct7ժb~ Wa37'|?Fy]Ne{i[g1ϣ )kx͵Ƙk> W՗>x6Q19BDK~{ZdsrcjӞց"r׌k%`Bf\i(Rp]OR";j`YDR+ЏzčWЇ dix{{(tظ2d gDA=| [+#j Q\])f[n#h)$M)ښ J\/zPlȂWENJ bSvK-"Gd\K+}uObﺸ)Ǭ\\Dk_<ѷOҙ_>+{,` q< oM2~&C`׳ϬU#r FWfYPѠ0vk#e< }{~=1>|SzKY33AA"sl|_4λ:6g7~]ww_ z0={v` 狋gLWYMz߻gZ6"9ma~&-s4).cI.m%w%e/EJh+Q4;ٙ^voo/x p=ӽq/^gbiwC{}9KLol{\V70: U}y~}y4FѸΞnY\63<,v2 1d4]km)/1>].x:t<ݺҫj~y~{kG7~<4|l?Nu2׍=ݚ/Ow tۯfhœQQY s={ D񕍳xO?qYM~-te39L~IѸ<33z٫^#K]~ uRhoP܎cIG!ܗts>no&HW߾@dB}ʊ*^Xo Mj霋TϾsI9fzȂ*^$tfy\ v躙S3#'$dQ,*D)sڦ`ƌs|;v {0aV.1ә֠{/EYq֖a |n9w!|dօ(tQ]-p= \>U5~3n❹.%c%@d?=jRlcՅFiQcإio,pGgbWc-K)3@T".g9tQ{37˞; }Zxܽ;5i#_uPi}p^\b40K s=8K\(dUyBB0rFMAbIq:FIm _r @3:G֚n&eXD]TbSnz[ӷ[qk-3:y#`郢=Q'H+uӚ?R Nv E< 7GzL()d)Z '."nvr/}x$n1$Ig...v1=a] (H5{;aoFwP>Խ8ŗ>q|>P#aqJڑ΍\\ʥ`u).֥KmRn]ʭKKׯ .e*. M0qlc`t)VsW߉i}(Յ?Z5 q&pO9 &\6f-Dt$06MzZǮ=W7K e36uld$V +CAqMxEF|<62O!΃_Fpƙ|j 1B"AwШWI $ fj6=;frjj2| M,i  ]@ 7ߑP+ ֑V;2~W02%=q N&o6(%҅{ [} "9ꣲ>i$Xp+DsÄ*0ju$XJ&Qa 揽 ]"Q!>1 `|i4^Pl }\H.:J#BB .LbkƏBvUP%{%Fiia?w' yH?xܣV!45%둋IvvJ7veBB*C*Ե!\u9hru:ZY +`m q$((4-AceH c* 3i•2/xva +Vsk[Z$#&amiQl@\&8m[ZE9TuҢ--"6O:ܿL#P-TPP)B4X`pZKjseBk{&J$gt&IVgz T]CCe$\YHuƪ"7$gt*FʫuC,0ֺ1D,@NPjCB]g'o=g+\ e~3QΊiogT֌77踓=n_ӛ rFwyݼ #C+}3T){A@﯐Aٽ'T%T2mĂ`C)}rU)< cq-%\=0Ԉ6L/)%&PLJ,*o'W2}݆w Pc ( ? 千=JӃ?(P ˗RluAwI{LYѯUHR쌳-Iv|dONT';"0Ώ m 7`Ilqy!~cGa*u#{ż޻z~@($TKOBPOda&)=FfJQ3TX]Aպ:v6n؝ٸELya_a;pЀӝуF,8Jbm;:0, {;ѸvZ]VuAoM`%.pI\dR.GW{NPS@3)1MݹJT*WfI_|$땹0!qPy߆OFXAfSaRCY=8L_{ָ.ag0^Zo҅X\Ki>u$OwWXVRv8gG\N0֌>'`2O'H kT{gv>i;4)i;O';q8{~CE"8`"pʄR1IMWv>YWxe{l 4.꒭..9>VluV,K~>6:16݀рʼn d@4\$I.Ib.2\ Xn:W-]PUDi!̡j&UaNbRҳQBVl NA_>_Sf:9w&Gh448CbD_6~mRaȒ,1zTxgU̐s06A<`㏒fEf%)r7M$\RU $L@@P$FC(K2ChpC8ҮX=kÐ/dlM4SK#t~`->YYB6W~qvf0,w6(eU6X6$,,#݈v J0FE(E2A384ƅ=8Z1V/DվbZ2.Žgj[M`) c9aqhw~˸y†;J<'ON;כ /.*]v3OȎ'y鍉}oAl,㏅aፇ.&)p]Y. 
'$=vp:9?N`]O&óSL Z^[ӛ\_8יDi(OHf]Tʩ A E!`Nެy6N}ehϏ&h-fΚDS(` p5 KJ OO{,t 94(TN*ƁpFq ?9QoFΉG~6/=Hfu/곻ޤv8(u&a(Ȉp b6J"(R [7eLacCF@Sb&8>^ φh>b*B-ms0cDl6ܬ;78?ǝF<di^u5UJrN9Ae&:@h JO4DFVei#<.Yt 6oTBڣK`1gIeeJ?+E'Ef{5(*6=ks6-Iq03@ iTQ`HBmD1$~mu*zΙ>̲ qظy~7JuHT߳ ^|g'}h|m3v k4Ս+ۗs h{pԆjt.6DL1Gd(y|w3[ϻsP j9ejvВ%(a6C/fڛLǧ90;0&?Ε-F{:ӎN}صSߔ`HU*0tg2vtCƜJV 7.A8~x=l L Awڷٗo8^9:ZnV'cOk41an!w/]KWx,D$ZKUX"c(P)V`2hqc4figYv4x:ף|29Gp9쟓ɵ]>ƕ_^;$/ԯw85!%C\X"vDT$ #%L$#&@`4"\D jW,V;#RF0FHC7U_T /zP!GЂ.҅\PKrY u)rGŒ 6\Sj4GN˥)a8 J8 )*qG՗/q8Jo* Rht0Xc=3]YJX IO ЖZI@R!2Pd` Rɇ5s0tlcTeFR(K01`MV$qՕSGEDV F nBIc%(X> K”1'1Ln(9J"xd[o-I\ 8}+pVPԷ|XUCDE-uq={nDjH.ېLpb &vlXJȇ"I8sLFL aMc`dbX %&Q\BLP)Drd,63Yg,+Nvx'tkRĤٜ+_prN3k`kc<O|U TlCf: 3 P8wž(#[ +HxU1;Dhw0#gBVUg Y&ʳ=3 ME34S(~T$!4nj`ۥDI."ւNilcMLJ&$<3,V!!+pC;Wf2BH$_pqK"B(,sHH Zʄ:34RPB?{WFJ;蒊!wag;=6| ^ik]:FGF 2 U*b_ b Ik% v)2\+%۸rQIWZ׎8Dh R):\Kc)V>rIb0~R "q]#\?c2ănj^W=1='V9wr5aa@;B Rʝf"NT1Z (QNI ]fg]ia%롯Y4ÇN~`?/,^~i3bΒ_['~:Ҕdz__.~JbB)5B/ //<ɷq3<~ۋal­%X$?gO;O::'1B$uwxz 0>`lQFH0i#%';;/O1sK <W5b1.H,hxX % ax0Vk"Ye[X'hUpkfcX){qxTxhڔe#keXX ]U0' cF*ɂ>L0t8:& 9fckQ@>0# 0)4VIr Ƃ>ֈ]{(W'<1ACV2uV>ǔ*u?/Y&6!ynLX`xI[Wm\pK _)+0_*NHrR/$6fF<HbqbK]`k$Q2p\*a 0린&`c0sZ) e=)KK jjϜQ y +Cħ!!R,8HJx⭫ans*V uCwU9^&fjhrSEq ߀*Z n$)\5r,0" fZ2)ՑhwŠ56*uѬF '2/y,\yi)M- c=1PB A#4:c1 Q,Ul] T,W`1 Gt`iD7xJ ś/`Og5M܌(+by:QR[RhnQ 46j uvQ%P]=**XJ1Pm )I[yu0-`l&B# y_\XVj&T09J_*g`+LpèG!;5vwH!aJD) 00l@ K7 )юL-"T{B[ǂv} s+pO;y{e '"Zl-6hf~&@`0؛D xO5 kk<$0$`fyb4eNº aJÍ_'5H "Eyhpj%rqLU`5Xچ""%Q@b B̛D^r^BGq,Z)o;ĤR *%$Qv)Fʤ1ZA;O^lHΕpi0R܇7sMB'wp!Tcd U0('^{&pgM E s(:;M B#L7< aG+19{HhTb*Φ7@V($iF^@*yi="J&y$lavVBCeW,q"zTtK1wm"U`-cj<y*zZ@30.]w}<) Ō?"̬(m̾bop8*,`ne["௤Xɸ11{b /F볥i2 7V(Zֻ]o g(^u(A>%ͨu7pS!C&%9(8gܴV]@x((Ҩ> QNG!3IÂJ Ct4lfIR@P `a<7r}>VM1=`VRKR;$>6n}|S)/ˍq\7anXj,B3XɯҐ]q9pG:*ŀ ~$^IصfᾺ  W c\!^碞t$*"vQ2=D9i؁:]lW B0ִ\\-s!գ.wp{q3DngE@߃ɩʗJ%tej 00_x>7oݽ]=~-jckLZ!AOzuٴ1թp46x̩C6JNfGU&-ICe*Vs0y+ʼnN_a|Os~s,6 rKlN?I.[Q *:C&0F1!PR+9 EV)j1tIZJփԓKb0aEEayXR_FC\Ue~-[)本 5$QcYi1L yߕ砪pFcfm߆ۓY܎`n$w.oK߽ߋ?n~3:anw^ Pu })Mߙe/ID;OZnso!LrʐԠrl{#ᤍ8D njY<@`3w7\bߝoPaV)P?^>{uFUg`:Omj2PE,l#W4hSS)撢m^A%$h4ϴF ;x@j6*iEXE f,*&hu`Km2>95^c @tv=VhX< !x_* uaȻ~o_|%v% %[k=/nf^?$UV-  c3l5h?~WM2/]MeQ;RiZ4P(E< ( ]0,p) '"Jˢ90˨sU,hǘpu![ޛum?'A@ b[:ow,?oFeYΑhDOj5|QRd},myIk^00 7aNm\b*EjFux9!zq /xꪦ,Bff/K7˟&"uuօ.ۂQ!z{=뽞>^-/ \~oʄVDh]imCVAs(( o.D 4$Mga:Luy$1+CNu3\37']2/a0ir5&*e㪝L_so7#AK]]]-`AD]!B]ƴޅ{NhK9;!p9Q7ȴ~vtqoO?O8T|)x}_6vͱbַVg0Z{q`ɏ//֑ɴe{#<;dz;,]Rі1[()FquS;FEY۸~aSx"ėBv-OVcq} |ѡ9{Ҏ]:&)y)6O2"Nv~xc__EbzOSTBt zḭOFMt߮9tl@g\zoNI>ߜ~sN72{^m:~}*c?*&1a==Zj}Bq~Eqʋ2L1匥GV};>ꠗiލrm5>l{w0H8% nxّ}$j&iDˆ#S1RBI4jEҞk(oX-;Z-_`|Ov"}Ri(PzRd1%h[g ˀG"+|gN(' kɫ .vk!K2%b!z^\foW?q^[ a+M-&`PKMSJ ׀~q [ L>..MSOy {ud{i#~ &"ïo`I+`\>G7[6b|@!#\Z,&:{C4gJbLѴ/&^N"! 
aV14^Ñimoŕ;;iMlg4]׹n/`W 8|n`v?|m'-z⥂RGH4ЈutBwɡssݑ 1`-Y{l(P3;vׄ,k'45M+_m,O5jrDz}Uy+׫8%|K>\❷]?mo}*]-g8bΑ.Yeo+E{$mqb!1[6;031'wt'?cy.ٲL j_lݰN󥹩AK%'f|b3?>s8Zqd2XfQ8Rr#?kHkδLΎ!I΅F*Ѱ$PW;mijb,4$;%7 `JQ )^Dm)zYs=9[sdx8q4įIQRb;^0ņb=V6yk9EȲ܁c0u%FQBZ\JjrEvZ6N{X =/3 nj6 Fo(RAC7;_wLFZA}2>:Xw FY MT&2~J -#GqudLF]&{j_B݄a½^7@4h BNZ,٦IQ&N[ΈHJȈȹy5*2x cpm ]$LN,M&B`\1 !(РT $iLZ<kz`$-G>Mlh.EP*vd x1yuQDɚZ}t@ ؓi(7{ED5%"f5y$ /i@$h &Ӝng#v f QQO[2gE(YE|挽׳(3vҞ 5w<Zl.!:3(A)i6TP挍\-i,b$g-&{4jTSz 3 4GP:>C'` `78aQtHSW"j Wl!50.I-s"')h5P2{v6 hgًܥ !mJ&X0D;l QdNREX *uF2<2:g,E1Ű~B݂ nCպpPr WXƔ"*Q] 2[ T Ӓv=)@L#`A _͵ #W;;Ƭ蒰yN;b%[N,6u/Ҁ ̵wW6|D K Q< JL+_ ilef_T[󠠃J'DlY4&䎉W ڭc*w&8'|p&!3o[ ײ6L.A0V}4Թb1@l!^0YcD@p TZGBr#C59D5ƽ()(}ke CA/dSuh(Hd`,Erђf,* u5'0%=` _#x㈽(R4y@D >T'.bA\f.N'A%rԭJN WB%Q}2̓ vv3ʙz8;[;SWJ '"Q9D #DDϾ"0p.aŮG`Ku@XpG6]Z%.Pfi#fV&1DZQKr-[!^bF%ʠf"حxoe2=u:uPuaQ& :d:ubyMg 5je^Ab; >o,%hvQM!jUPA;xmˈށ]ڐKc.^,s Ӷk.h JΨ4~@׀R"`D"<( mM!g(Z.fc;6p B L 1ޢZ۝3>; P4#L٨k8nHJxK `O2@9o94r#!C{CDhRjzU XM-: )cj*@iG'6)0w٢bk5Q'[+˭ۜR;ur"o UaZ 0p\tHeQH0q졣T#ϣ^:P%Es OokJmLL@cs; Udn1<@[ R&_{&=2jqjNb~Bv%;Yq8TmPLKf7{nfawq*^@P("$߼mn6{t'zSoG0 LK"OpsWn{9K} [~6oVO4yoG7.U;>׆b|v1g3o'CP)qV`n5,u(7r_ yRrzėyh8CYESz+}'[KgčP(,7FM߼\CӡF;ʿFŴcT$DRm@A CM\NYTMV 퍛*?2Ol2\nbC2Rd;>0or'coH:.+r]7uȻۯx#JٚC纔;\`O9>ѥwOڪ^bX8*(+jщJTXSI<m'6Òjx=dK-5mvrgwcP@o~ͳ\q 6 0 lK?`sLA?ܸs- Hy`l +s銽?/(0\k9dk'e=?jJ7 Č}0@ ># =`bIY7BTOV>&$V"3cUDg**+IzvAd=ϐ1N=q߆`c4dtZ39tIG3۽3|z ٩? 2j6ng YA-1!zoC/ 8lc(V.uIB#L&go E﬍OR&ȩri^LQSdLYbXn? F_S|vhǿe{kJ!?{㶑俊08v/#|@8wmbE6=H=1ݯԃ#Q3|̐YU*}IFI#> F+@u(p<(xNX(Ba$q*r6cR1:V F/PňJ$/0!Z:g@<)$ӁTv2X-1rbE;v?K);4ewl@?0ía'Ìe{F~h˖%1 4#DMF͌STPDrB|ɬB 밀uOG$Gt`Y_'4!X? Vi]+'ŗ=_L^yPxbsR0+.,8gLkř>#gZKj.;ӖQTzpaוWl8K=k _  _Ed7uOrcمM_yz*7.-ģMm_Nm0#P8/{#Ÿ vo |`aLݗ(y6 SR}q\K~6+S>*rݙN_x0ZSt/'VKvJWG"XqJl uF CShh`aX Ձ Y!qÕ2 r[鬙KAE|ol)(a*ւ㹏 +8M%~s6gduq1H ̈:U%T(hi,t8nQ'pF{=C!+uV|u <ـ4WKQ?RHl%M̔׸"J* wJᅉ[ss_,\ښu׳տ?}9%k2Ƅw6'zġHB?uIosks{v6uAF0kF`j\հk32^^-^vl3#7^y0;3K7x7_-V qq }fÐ!\{hwzG侏H0" ڳECpXezO*h61q/_?^E޿z/RB]װg vTFغkp&0Zg|6$”-mCxLyJU&זbL2UΙ"\\k1ϰ̐^b⍯S^~-"?3p k7-P6NF {/ЬgmKņTj2*pՁ M|e((?%=J܈ZT#'|zu9-)> чI #AZ0eMqrg^Pr%` Ϝ\AoH2KqQl]bUO >ƿqrzBƘqz6&]|_%F^T,JVGOl͇Qr[NS}vQ|O}7 h7Y9, .(8vŒ ꛴x#>Wg"φOЙ&n\J*Q#K$ks1pa83#ΥScbR(Ж#OLPq5SǘȽ!>Fy"(VL,IzAVz%Y,Yn4~^X}Q-#v6^N/<&x7y?]SX'HIag9{X"7+qDo W}hMu{dҁq/ρ.2N@_Ƽ |cZKBhJ5L LpKZ(< NPܴNM.g`n&`ǓSO2𯛦}IyVh4ZGjf^&R龷pN⳩lz6E˅s4Vy4gGu'^\F\%yMm`M\wXSey85a||QSj;YWnF4Q0쭣%"܃K7 Bx2Y&Xl`wÔ0gխTGK,^Vg/^r z4%7K\ş9hXod=l١ǫo RA,iqGRJPD-n2IJn.NUaJa.X~\N>X˲zoϠSn@i(dqJ-0BWUK>70gc >Ty*)Q2GT=.\8v>xϛQ<@|~I-[ao Io9o`-WO2t1>/wS"}Z=t"m;|r`}oZԐ/}XOAx<͊(nU+NcuF%mLlӝ;+)3a wʄ2mjNUh3.<ͫ+DROÃY!$pOf[nJ/aÒ>xd;߷"`/Zbm͏v"]OgA~N>kpa9혛@13Ԡ1҆9tųlع_`BRjl{6袜1U7<DVg=;wrC1ӎ>`RZp܂U, Pg8┵\o?<("H! h?H+H; g=D% kQcK^swab!6-ڎ)RkB[c͂xI__^n'jKYd=vI&mGW~'wwbsTXh/{: Ӳ/Ƿ+-j +ؾ irt+O\HԊ6˼:oB`SҖ^vI7jv#%"زd &A|OqzLFRZytFlZ Zkn5en, L02]%2<Cjqyg;Y}dV0{Bq[GR()GWi-ei5=( 2M`XQ(,@A9Vo%TABw|T S4Bc?<45H;Sk=M;174ȀD*@ 𐙅u7vw4P 3Rb? Z)iM暔BCKJuh-"M`ōcw,ƅgR@X[|遶JP&O`42$42DqTnVYY=YVWZ`>KX/kOڶuad'! vIzcQ$DN 4,5,szsKZ1xI~)*Od=%U+qHE/4}!x`85MhIfSc0f$Y"=eHTshztB_tJN4_wc  #92%.@>"Avv2Myf~n3g?ojwlgn[V aAȑ>Cq-Æk+ĀYUѻm- n( bmQTJePhAH1٢r'D4E}E.f`q@51vrFZwqhDF~$5dƒ-{9e:A}zK MH=f̦2 lޅl0=n~4$C?Bcy)]g̃~HZts8߉?SwFEbI?x]nZ1=Tm~aC58ЫC>UOm醘0y:0e8Tlq}R9E\` )oX ҕ <$̘E@N[L_5(aUxq;f*#}/VT2Cԙo:%ƴ #cmt\ލS4/\8I'd ,0Pn=ho(L7FPk uv#ti!'QBR MφZr@G:'+6{D!6@z  lS7,\@9Bʞۘt*&]爆6Z#pC{&0,Q*r <>{p$1LL_0f#I|"AP8;vL~$Odp-2tcFI"B'չ=^⠰S;V7 !Ed#c L7pA|`Rh{.bۨ!Cn!1<8u^nI/Hjdk$3jxiM4@.{&} ^%ćtW4"YNV$[ynN/:(VG(EQ|8H1lWz T){ ²eWȞY@DR+hD328j=a{Lm!i:yXE4+zufJy{ppټ^z j<Ɂ%/a ZZD*`Z#;c,Jk۰GAZr&G AG5 Y7v){/^O#@Iljiz?ed3ppt[ $ QH$1$YaHyi)>\<12Ah G2q&aL"KƉL0 zIT .}N[KaVǁORdŃ,! 
3vp _euρ׀bb`S>[g QU-bݛ :%+݈OefDSxA"`~QuAKB/ :`Ah%>>/o,B-v/ $))y{!lhOɰ1c"(!"#Zo-׍7.WmuS$PWZ-ndrf '7r8`CiT;pM]l.~:U=P ^*/{=fsş D2<>LNsv>yx_?TsAo~ެo=SZRNvY8Q 1$8C£l(x %d8Na%4;tL慺ztA?ϲY7x2H) T|SӨ#|#(}qa$9Y<@!O $1 Tb殁qb#B%彞뼘WJ5#UTs4qwf9 Z NRCH"Ѕjl0^*_M'j{V$&3C/#r@ X+;|%p {Ekj=DJ5[,}jk\XoO?@PةDޒ|E* h꺾ډnL00LMj ~>?_3*' KO[X1ڛ@gY2dy#Ƹ4%(Ѓ(qf4V_Q(,V$o"͞/H\Pl3!d]՝旻gv/Fwo,g"G}C}S%+=4\ߨ{d=4J% G]}?.u&oœ:W^Ҭ`tt~2Qt|:ZW/ * %qшQ8R8xJ{p0'1o4`>BެzJ[a=`2\88΋9dba:]B}}8xRV_ʁ<-oՇ){\݂zF˦ρC׹Cr{ 1o-F%7 "jW\ c*Tw5W\d'鳝bUn;4{g`6x1~V+T@*(mjZ4 ͵ZcѴ ]&*|8mi6s9r:'F*i 'aR|BVy_ki}-E]z>hgܷr иs74N/$ &͈a:Yv(^PW;Ly5:z"{O+B!TbUth=Gt+N0K9U yyF7{TDg{ <8@`P;8e^O!c@1PyÕY]؛>zr[H!ޞ#G+#GӷϢ$bEs%bYAiofz߮[ϦbZG{_WKK<WUǍ<Sʸyfz/Ϸz)s51w ozxmyeeAaTsN)5B3Dyar1D19m=kj>=cSsT8+hOϝ4lDQ^: ȷ`/vx3zQ-glPqd)%3R9Fhh[,gXL՗,"̜ruwDV7bL2ZQz,*鄝jmJ,sDF BT5=hTpA1F+:,WDGqi %;na=8ei㔲!=axČ-@ȱÌlHT%L0࣑)q1L3PdHq* LX'8WT&(Xd ijW$,F5g"!$ Ǹ$"xvcIRl!F;Gh=Azaݵ挨 ;j=lps-¨gv /:V 5IGum^ [Or);|̷l͌+Qe)):vǷjAs`dg%^t9dyϟV㕽r ϸfgͱ&sA_t%(.z;2W@"a0{3 $Ѻ6]!n)fz^)ʿlhl.nTіW-WG=]Bk{T]+uj(<숂K3CjͰFE'#(' sZQX( (ޕ. ++X]ݻ?'" ⨧W9E K!5Vm&d(n$˂ʁph>ZAHGκ/P;Je_^p??( :hƺIoPP lPpri@9PHQ'@yO J&ej fӈQ(^CIs( Epr)h $),L`&| ꇸwdJyfOl;yE{ڋ5٥S.*-Vu8|ekne%^soGoP9@J%])0ᜧ<>&P^p1R"Uns*6ku/f_cdYEs[ =mxX'O"Q'iZ#~-w7w~)?}oyȊ#,JF,M/3vVR鿛1Ro]ǃTWq2TYT׍M˦ w^Sg«nu`AI;D"03jO%Xj+W >B锈VKrB&ަ+y'-5-#ãw g|?gs<']3o;hq慆e,n̖sz0<~tۗKDw%')e2mlo)/?Cn,uH3a~J+|߾Q YT|WX)h@)]۩.jcun~cکzx1!SO=TrM\eӛ{8?>+_-z= ;<X{GDPs6|Ѕ.%Á PM-Ţt@0ܣķ I& =|fp۸t @3}._=+@ d@'"+#GEc6%ACY vg1}K/ $}ɪu̫%2,I8HF! ׊+{d MrNW)^!WC#|0rtه)"ތ>|w?oo{.+0 WRVx$b;j3y09t>'8=I%ͅ.!&q ,$e#ynJCAXx4zYYq$pCRExm_g J65w S.\*&yM>(&AJxJ+f-\Ip:e7}|V ٰTCNUÔkp 0 o>@Uϧe= 6F(e]G1Ⱦ&S TSfHfͶÁcsY)}QNOr0ZS%тbGQc$¯W}IxsI0E{fȓ3 7gڱKfK׌Z (OxL}<6 LvCvd$AJL?#,M"2aʦ< ^'=ϑbۂ( RS H15-~"n{_|E%w}kle/U{+ݶ*HcNL3EQ|"(0gR4gK>`Fs/w0MNք9:礞ԯD?<{WC*@Jl(%4K R3eTLj "NRxG)c<1 @Y'#, cajZE7)Q^m{>v}\})QτqY@(>C 񏳌+OIG]bP}J\I@qVpVQr>hFK5 eրa1MU`ppOэ34/ut_w)d(0TZZHjM&J#5/;vF1 0l~R`2&~j/ܖu]A; #9 >xQh48ɏ#:!c!27(c0,8kC٫WZPiq/3)ySRZի_?^Ff %sYΛ w^lͥYIw?giE|%y'7W|!O8WO"B_?$7*z~?kгWU}r?..b4Cc[I`~A!ťjwøNQm! Gg)T}c\(`<@D0(#h[ ={t?.` F0LIH\:xGIE`yz5!qlҿĹ볜X{˜kyͿYyw|-ogǜ!W< -eB0&G:H-$O;\Dh}x1_XA6s1gk^oN(> u~˻ز;BՇ6i֣s0{T(%ݏƭ0&k}Iv܊Fm7<`ݢ0P9?1gK}yfpsse>Z%F}po{Y071c NJ2Auɏ <ˋE'2k;zSE_O@Y (^ ȤPR+9NG\yG}6QVYF\J0Ƌ[kzt.FOW&"J3qx\(Ni?EAލ˨\;sEkCr. 
jj0fH/Z]+>thvZRR'eIrCPy@#Xg3pny0iEG1Kn RS6 Xrq*`1J^QdAq"oX*i`8p*u{k )ETb8EOwDJ(=mda-H=5JKjN TQmAadN F-0q6[ XGMA,61J)L UVhO4!괎jbM 7M0u?^_G9jE'^k3^EU+2^fo^jbصri}$.6.*$e#??"@/o4z nl^-]b6)__1n^ HJk\AݥY* 7aԏ,3#{,Za/_Mӿ?kݕb?}-=\k[cD{OEHjYs5v+ GtJh&n#Ok[CkbBs[EH`Ⱦvw+ 0w.T8VJhփDMHw!4~˥Z<J Š2ex [_gKBv  ebJ;+*AZy t}אyRƯTCJD2Q Dx#E #]>H& Q^nƹٙBJs.\r\5˹h-:뷿̂9C._n\؍q31JOsd6DZц[*>b2zw͒n=:Ԉe*+LCF/9wd=cTGYv<]GWcP@N> Vn­jNi^4BK bdfEeA }r_2ݟN%[y/-F%ehp[֣s08e1U`MG)ܺp (Af  4]GLS1Z`I&zH.pc&z+8=,%B2+"#rt);sc(F457HXXu.|ɨ=.xD[HvCSrIwdk gwl C@JPש:!HOsxZu0 JR0Lz&BT8H`T?vLǿIR" Ahao%C6(,ތȧ{x^(wL %p3 D{T@EK>p(b/R3@S, x cBħg Q*ŶF{;jEkN&L+ '6 pӹ Wf&!!\DKdQ2{h_OLJ1>hSFnńj>$䑋hL )j7%esnNimto)xA1ڭ y"Z$Sw+^|>j*M.J!ޥNGKM#-FtE$@"BYjEtvQ,iÅpDb$9$&y'pTy\D@K˽Q+%`J1Q.}z_RCB%nՊNJ1>h ٬[1ڭ y"Z"S{{-4٥Nim۔Q%݊ n}H#2EOEDas_іmmHf9oh"^d/NqtUҾ6݉7i6__.z/k} `ܿ^ {[ZwPYG%[Ǩ g&NJyeB#!qv=4/c8,0׋ޞe\y8Lj=AQ%euf@ O/n!>IJpM\ 0UBI%(9 >f 9syY0yDw@S;6oDv}Օqx .glW.į^M)5M)MTFZ*0dRib,l4WoaZ"͆iYw= j7c­7jk$LRK_g¤SjDVM:bD1;fJ&RXR4_4ik<[;aA[1l<585Ƽv_vuѹxx}AmQ6ac+֣+ 1YmG]lY 83(g!q =+yL7$$T_1 pPAC!z7n{ Nʈp ( ҟlYS<ޔT_RAHƚ Wã[%]=VҵO%C`&'3Gb^GwF 8!j !X1/wL+QK؃?+,5DöLWX ab?5ckgyRx?*ă${dRet D&έų3i׽./FUq6_6eN<] 9XhK;/UӤ:n~@qUӇɨ&@vb-t }N$%­\}ɪIueͤ"] 8NaNRO@$c1td^,>f#\q$z60QG=KOoo"rC`Mf +4Iy$+f__!i$tTٟ,>u0d%Y@+$Q9>+'OfH8gs[ ^a!mwvFk25!|Yf)'kvw6#_n8YL@i.u[jnmRʺñ>'>O`0Gz1Eu^l!*@:A, [<h˔(n͔IS-p~Vs:t.h=6Tb@Qn1N:vB-*Jf fl!cHBHsplKضt^PkEy bg9=A(kyUF9c:3dXTH7h ڞ DeUrk6@@E#tOeCr_AQ|hwc2ͪI>cOդIb :3j9/2n?#*E& Df(e[0DTݔ/:q>:F\ICC*0R$6vYozk~ӥ Y3bgdۋQ1/ka:}_(*T3Ջ_E}sVwwy}}s1?{tϔJPp1M/\ki;e3vahs 'TDaz|tƔqK͘$SOj}'Cİ5S\9jj[ wsV\ЉȭDZ}FdFIK+Ug,X/lƯjD Oz+y=P]@q|##Q+m֘B\y|h= o5Q&'e\`kn*Wn R@ݝjAlLGv?&4_qf0f=^x?wwmW>L>MVG7mfhbi<j{0 1tK>i%2yۈ Qxs6Z Q#n=7 7+Nw;; w 3wƱ~?&*G*%oS5C'skJ*VxQ5[׌ "U WvF$`;%JI[4FD+^ݹ"+\kM F,Yk(w BQ(.$ fPX) 8|`ղ/@e!UKe[wfUQGF"û,yׇ-F5ֶ31Ā]@?W3#?1 N N c#igT AGM T1z1ĪoȳS|"+;DZRY΁=mHM ea~Fʹ&8B&%$ܸ^ݵ]?;?w J+* dtanAM/㈿v#ڍk7⯫#~l>FR,EFT1Ftg pJ2X Q)Rɹ) ~q|,BE֦ ]> #ʗsc֬ ײls1۷g'ykuj,Pvֱ4ôBd&!MYJ)2F#$J2Y Rp)cRmzLiA:f7 u\9;Q`L,̹whr4sX:f{߫gOt;;3mq}\~xJ*-lyaofirhL`W1zW|e6m?Tv_l<,ܵ2H$Ͽi}ՑEB#YɛGU t.p`S 0ULh/h;/P>`Ɨ.)0@Y8Pk5"lF'/Q\FC/7{cP90n㫸BG<1b_ouvcZ/%6w_V&N\hM[4W Sxv S2CR;>*`gPK<[[wdw SC9ZwP%G CKqLݪ헼yVOwnD DT*ź a[w+0 "p+<}a _~(V@תƣu<yr[|ʦ;+^CkIQ6~2 E^j<4!]YW}- mFIQ_2v/ȡrRt6 %k0%%H֡_RBTkeŧ%T%%v/</"R` Ɛ{'! 4\?} zQS{$?TzLZߖWA c$)eGL9-ì5s`5& Bn^M9 P|77 mȷ VU6^(_ __y8| Vl؀{03F2RQ2"H.(XVNF򝥽0WckJJ޼{|e8S\8"U#5k&´ơٷk-RRtI> 'ϦsGSx"ir*i} C59v*{~ ÀS v =O&~Rt~a[CFA0ڥ%g$r?\߲IhsT!,tMBUyCVdˡ_rKǼhyo65h 6/5cAk^u~nRnĶ}2@dihfl6Cm c#m>9:Eݴ {a8 |`Uuu>TzHc tdb:4tQH_wxTd.x_wԴĞ& hՍGb{p~=Iq,a/ !N=01üB`U/zX[9YX75ȉb=`POFvȾvhq)ueQB@CV9@4qHS-ybFIJ!F+(:K_ħ:;`UC˃^Y<9$&gyq͗E=,rrBmtg`^RELN/kknXŗT/CRN6bgIn8)Hk@DutxƓOw=5 cI$zd$=4OlOuoD(mOcȩ.xe$+=Vl{ 6Ɯt}nPhx)#݊ƥ[M֭@#ʹg釻-&honxBhI[p`hp AhFʆjfWi_2<ʑC[ߜn@#ͅTJt!5(ZuB!|b|iC@)pPr; #f¨G{0R{Ǔ֫wH΃GCYD12Ȩ:>WH`5*xR;DO ! 
c䣙CӠS[*qԖ@jk /%`רS>\z֣[V-m#ƺ`{=!Wm ڣѠWmulJG=j}6=S9Q˗~Yp5,|!/#L"X^Bt VR!,+kY-auؒ5*)CR82@u^gqQJMK7nT 54vH5V]ϵCH I7yD"凫bN cloo.X^T S\ߋhph3 R\5%1KuӫoޜN܅ ]8 MM1ҴVsa.PMրU)YYn]>|_7*9<>4|ݷS{3y)|{KܳHܳHܳHܳ:q7pՔ8avƔ*8ARc($4b;'hD 7&&H Ȩuo+ ނU%@G[*|IO⁡fje vd ,'՛z)kZ^h֘i$r-#++rzri~\;n x?/oI`ShL$SeZZere@l})1oʅ4LZr(DgdCդ=D Y,&<m0 ( {U1Ϊ,ưrn 淚Uh}uoV`>i_DEIB N&/@wofSw7/&0n;=o[ tT,.gw韹Z+ǛK> ED.kR_S %S d,L̊BM}3{ D4/΃90j~:i}ag ğcq*&8<`g|{*m] 3&FbafOW{-WԿg_+84d1^5/*vL D(1-^.1 6^qiVGvX?V/N?{m1ȁ o#Cc% xJ`86H⃣39Y U5cګ]C'zlh &;pGg8ةP /Ta'DHE)!t-iؔIdI)& FU0"SY:KeXIh;I;%O rͧk(g{eLpȃ9S84 CBM`r<_Z FCl{AteXh)(qKui#:e%88r,#̓g6~4mk0x:z8S ""9SVz15AJ43BXc<1p4i f{3K` %s`{vGBdPmmr\4~8P$zVSU)5SnRX|t*H(l/g7GVi:Qt ad~=:]!J3FfyFRRcg֜Zg2_)f5!F[dƏ'?f0eQ{0jMsT0M+~$?@Ϥڕڶ14Vy >ma< p,623Mxqn )uI % bȁqbK6Ϧo{A[EaByU(ޅJ+op?Iuq?[DՎaQ"`*  E-,Oc4f4~ :$3Fˀ zXZa1QKJ+@ɖY̨/-[0X@ՙ:L=Kڄ$FJDJMn(Ü4"J.PB=݅Oc0[9g:y 󓷋!Xp@`I`~VIdPSBM6'eߖz̤&B4R-t|&I ޙo2F5Sˤ"w z({3Fr8r3_']<\OtS|S>9gu(n*ljR$m@bvJ`aGp a R[#UIbk/u՝jd ǽNf|fֿTXb=ΏiX(֧eL3iǓ[ĐՇjmy)|X8]ݜj)%[L|wrZbKg$:7WaQr4/>"l9dg>ƈ+[R'Oļȳ9ncݣ+wyYoòfG ))F?n*6떫>uߖ ` K`-{֭ |Z7[R&mQ ǡ fݲjݺ`N,NQκeLP]K`Fi 2ߣrG[O*AC}SԟBGRt3Fv3Y̙x5!o _Z(fJ2q)`PII?xrB Eʗ~~ *UJ)[1)B@kBɨ-a#AR)1,V1e,VRSEB\}h!ᶕuC.S4Sѯ#zhruBg4n"Ծ[@S[ ))MGq>nhruBg4n"mݲjݺ`N",۔ ϸ,NG-8bPly"])ϸUy=j*O${no~`|nyCcJ>aĖIz Gh3w'wgΎ8i8I*!riUZ轪OL@.1bZ쌞=Dp3#s3ӞN洦%k2']JŌ td}fIzVvj5C^0).k}+0CxlABn6=KoQ/(Dн:5߳U u3^$B*AK'U9vݐ2}*G"ŧmVK s D9bO_R+,*m8(+19͘xS[JESt ҅{-jqF64 IQ{<50bΎ=t{{qZ6yM:%{CPҋ`#erGsc#Z2;8 qmIf"BeAwvIq7 T`QPIts}vn`uϖsޯJE6tqT Nx *w=&QRW/7B^\5i|Я)~zbRbngo'[<U97R?sߏ7p}lkp* gDz:e=oո垰9tRӧu^ G6xPX,p2E`gׄ)RPj9ɰNRX<~ E& >-eS0 0 1A:\V2`% !xXH0Y:oZ1rdаb@-qc6!D^U\͐c){zUчZ#ғ~l[{3&?k4ƤRi3ƠJ V*OBtƃkR)K9eβ +#!=[᠖Z[#gȜ!5 z,C+d1DNYlJ}h{QˎN÷FbwV_;a A =<f|PeȐZ(Z2(^j`1L1PS SKKmR@.X}{!Rb C6IM|z_Xk]ߜ]Kv3 y2?AZ[vb98"gzH_)e1;STއZ``{큍vaKdaq:,7HIE(f-ʌ/2Ȉq1p\yAUp?xN'2AH$԰.Y |,T.ڍ%&%B}o1^6b'ma]a=Z2$԰DNMjR9;>iN:KHıvI‹TBT' /B:^ 2w/1$԰[^a{uS*ˏ͖NtdJ':H],Y.׎/pN|95Q蚥Y/RB,FPďɔn4 dL.v q$U;{+3$b_R55%Cݑtx1Rst 5"Z& EL=@n<`iAKS}H*ԧuܟnf1)r_pCyi^Iy{g;1҆9O6$hE6U=__m|2_r[8xmx3f^NbʝEH򐃚b m`AʩnZ.aفwE̒&p,j]sj'>reem)K.aUˮ7^›[Nʛ?(BbOxQ/<P%D9~}: C `fc>VҜ W_?y*nEs$j`+ 56ޘyb'8[݆xCħ %!>-)oxEyħp"> ]zϧ"> 퉧+?zT|a :lʕRDQm' ʈi jQ{X)$"F5N%0>U7~,QNC0~R.2y^?*'OeW`:Mn"U䗞Uh^V䶔H'8z6M{?)Bͫ3Ld+%:f4Ќɖ(]v؎Ta6/K#$yq wϐߗݶ,?Kw @b!K$[nAh9Ǜ̳͋'t8;kZB]ڌq[fv5{' ڝ$^Vڋ~գJHR޷suBvc Q8EyPT(` < 1.,GXj|Y!dMwA(ԢghI%:\Ugwӝxe.&<Y0Zv"Rb"aB8.>ڐ@6VLnilWjIg| #ϙ4gڀsH9@WEp{{vԄҥ+$m㩰KGTKbEuH5o0YS^zƋG!3J<3@ߋkœ(Ú9"b+eP .+=T$ W;Z{-c$ǂ)RHehl -HXqxb)B@Y#{mTjKnk¶PhbTYq BSCt8D0 KjX* .$xZa8ONc #8Z ݷ51.xBr-Ʀ XQ JE` S Ƣ>U]o"}PVj:e5/fco _o5 jA\U͵xsXH: )Å :8Åpa ۓHFY#R -B?^eoRT; mzd}pq@۴3^Ջ'Cz7\v9K/SekvbC(@B!`sn)/Rqǀ})8 v<A?\9˖bvsk4PG}疲T1rj7jD]Mt RŨ H*:|: f@1?F؝IMqQH@I*gAȝ xykkhUbU)N͕px'|{zd1Ax1\IbӶG tl<#Lt\"rFSK.y ө5:3TSG^Gx"Gxc ( XyAt^>@uk.tZiU>FzHU\p  kmnt`c2{A^8ΘzVHq++(>OsV06  H+`D@,a Vn/7\keJ르1e^]ZRI nM_i c$;r|'8*Z:;ví1f hf fF5_5Y [ j4D# mq0|`K#hJS*š+C$.fѧU'L͍7{G&JCт$q%BumD嶘݌n%/Y7qkͧb٧=} lWc/>ipk;My 9V0EUXAzrz:t O`A5s/{YLn{V <ofQ>)[[4qjt!E&0X a|{z2wr1_ݏ@$2F{WF2SVo+D?eK(fU}|f3n6nsސ_ ]]c4jTzoi~3<َ]{}#+w0mZgod\ͣg`b]3RVnF&Jo7DIc_~.o{ Jp~O4_=pƓHz?+o/}<V.̰Je#3m=:ޗZ,ҕܗ\3+ ҿ̮UE8wΎ>#R,̮Lui OqIM]^߼ c;ZL\K!dI2~ee [{RnRnoE7lEsx^<:}1Ѱ8[ }HgWJ$^v.h)&!-g4p$*錷QJbxp4P 5?Z`(~ 27xQ=+,Tpl ( :pa 薂hm}h!WCP7p+ALQFh`pu(Ifp蘰4,}Z 6}pJ`3Cc[N0,B.V4FiF0 ,m] ^VXoyovuhz}Gc sv! 
L8TDv€Ev7D p]^INkg测bK.3y| +Doein͂9 ?ßGB2r)MQn oHga)93~o>"ؐߗ,U0.KS*;e׳i>ƛq_t˃RX8 5c c/6U㑑E 2ViB9eh `1{4Q:kmdΑUFQ}I(̃7a]+4ԡCiJRO r[,fB=`XEJ}6o:T A @]ΟJwl!Ti @Zb)^{s"T0QH׌5 ZRX%5aj#x*Ӓ?{kGp&lR'fl ߱6Ć 8vCz &/H޽)`\2D.%@ D&HIm &b 갢FbMPT=L*!`MɍL*›XWR ([K$RB*xIdpQ0o()63=5d@WjZY UY(}VCrüo/~2Mƾ?ҁk@·_->Li=pz10lb6nm6͝bouIiDH Qp@'JZ =vpnZG7X+p̀7Qs}DE=~y\rk/Z\"A7m=^6mM[ z 9% v#A>Ð"Au9Ts$(zq aB"U$(h>\$٠EMD%%'GތV5+9,G8pVz&VX&I/U ]"GdsyefqylL-筌,u*ioe.0wLz3V"ghz4RfG ?w֋!Fhjm?HŜ4 ʤPl2-2$"a9jޫ^q#J' VFJ=dbRsܰ[ƕN nzHd ^ƙ(,Qz͗_#Loc0% zd 8$SqQ c\[OM"6Eksk\^jϯEa Zciqb6Tï+XF`DKJ{QBX QɬR3)o YxYm& T #5uQb~vw;$i#BrF`J "S<#B\49B$uȋZSAF{XIBlI2%H3]/&p1AyVVa<:0z4E5(?vNl8[%4<.ʷU 'z ]>YDg-oO(v.m%J'*q)qq,4E{}.~S:;U\ǧrVq>2ܬRamIl %?w s̏闟br7}e!FmyԎ`榻`졪Sc!GCbn(NJxZ5L`4IG2J1bW[cCec+Iuv)}4%7+PC)UsX])VY5ǽ|!{EWTHC2wE"z7fH"v70E$pyi˜ge(g{8xk F26swi4i REzfъPRv3hVpԳ9R!$z~c]vcMGoi5Rϡ3 H2aU//:(ZHcuO"by*#jbW*?{Z;oPnͽ.ս~]b~ cq~'c(豠ԎvtlN(/}V`.PE?~{jTlIhr6^&&NŚX9 ¥+ ?Mgbi}мP/o =F1hf)umCc_q9c޽>JeASڎ*֜R} 'ڦ6|e'{{4?LOX>h^ti62M&֍FXˬdl'l4#,ch9K8Y-z2exXӪMC愷]0؇)1h-Fl8c=g ڪFہ>͑R-ǟ)$jHŦAfytT|*f9QE La:a>꿦 r fxx_>-nW7 |661SHz99)EDkh%3/=/.X#x M ACV|r*Pt,XfrY3m2Tpt0@ h0 늘2oVBXhBLXvV,DW0ZcN@S"a`LY\jrj bs#gxRi;bR2~|TZ#9dyPuY&D7S]BD'۟. TU!JZ >nCܙ,d+6m$0Q b|6@`2(rna;EEGOczE7qq5p-H3,WAq @ h2R]X@5Kϱ*2ܰ lbF/ߟbxtˆZ*])˨O ;l9@pydD m8[`ܻ\D`sޭ JAd]zR.e]:N.e]t$n\=:Z9wm5S$ʁ© 5MqFBZ0]n>y\{8ZoW{v(}Һ/ݍV+Q/wC&}xm}'ޝ(h`zĆ~^Kc{|ߝBCKbGOn[ a8]h8ϼal%Ƽo;`A`s{ Vrf=V{&W*P vܓP,'*DWC$VJ_<T_;fWo*Qp I`)W* MJ5H64KiվP]R"<˨kGC: "9 Pq rRX16[2RD*~Ѻ!®ǵk N|հ[p  >ѯئޱVbAx,"~Bϒeyg:oz.fheKu"N0OΦVm:ARp,e dwXOeV]OS+?\dI*okhh4t1r]qya\aiܗN:ͦ|.:'. ~C4^,j+EY D(ŵËYl$ 9%ѪDz)mZciZlTFS Fk38h`k 19FJ-ayM Q%NYgR8 &1?!:@Dx愑3j&v:ںEN@R74GFbԿN VM:*!$;q{bMrF`JԁTdđaCP< L@r1j8ߵFMԀ!ӔD b Dh Z!C( }xGҁtYk6: q0(38#:0E=)`[̝&Hzڌ``A(C,3I(2kP&( "`b0Am+kQR]>fJ`ț4˧(F E@;k`҇۫1`~;`z w{[t9& ;c1?~^\LV?c5|B7y~ŏŸŲ;NwW"Nx݌Aݕ;kmnֶ/9~xsGiz>\w< jlIg)˔,ǤHɒ&-"6~{ 47L +ܧ~1A<}{w /wz SQt鳒*c!DԦyJؔ:ϹqQL Caf]Ca {$i@o[S N6DCێn;Z5a!Dٔ{07>ctkAiF|SF΢[VnmX+7F6UVk؏̴)ULiu)NKwjtkB^6)׾V4K"3mAiF|̠mGڰWnlJ?n*4ĠҒm6jU:쮪д -X+7F6EsiF7VZ-|*Ӵ:QSvWUhѪѭ y&Ȧ6n; =nBGfJmL:%Kzfڴ3,䕛h#j﹅{QUhAiF||ѠS;[VnmX+7F6%sD79ۇ*Ӵ:ySwwjtkB^6M=#cekxztyvݪ'7@wt|tvJu([q+-+B0Rv<צ'СURw~tՆ'0qp0=UOJڙ@ 5qA:.Hѡ3Eo 1upgaJv ]UԘ)j̔1՘s65fXwVWcnFj j]UO`Ύ!]1 ë13%DWcj̭zRj ]1 i}x5fNsWcnȦ'S?g3sWcnWc您՘[՘b1w5V=acs֘B1w56=A ,`]1 Dë1 NxWcj̭zXc^zrWcjmxjB֮jB1K˓k1K,՘՘s@9>w5ܪ'pz5f)j]UO՘&1w5=Z+=]b CЏhj. OEI|!G?z{hFC ӫAţ8 !(*O^{ ͥ,oѷ OFOџ&WY#1~47(4 Oq2@7k1t$e y}! `ҋ(z?Ho6r^a:) yRDx(.+$@BI!TybѿCo>f ]8: >[x{lq4Cl%ܫ+ ׸|P D`?8:=엳^4͡< 7V4߳l KO]e쇟~m_VPRe҃6oD_7V2^pEUxL<&R'rüIT$bJ&cLaP("O c4E4 jV>/uBrtYyV> aB->p۹Ec3>߻cɮNzG< qƛ5DG\AyRq";إZGw5闖-\rVJLLuHs- q56q"YbE[Rˍkd x=l !y*N% =\ZjSBH`)βb7Z}^{;_B5 ,f<Ϯ Aށ~}ptpO%㲬W d!lM2glMNFҪz ?x;YEDc7ĦFS=+d4#˰.6v4N.f4&rՉ_B_~Azdt _&Bܯy`*\o޳"VE|0f諏3/'#O0:M8#!`8ƚk !YH1۾Ƀyo[签; Um#FPFSAI`sc1jZXA*`;E^ illzO=d"wTHNG `V9? \bC%ӄ(=R]Mt!='b-@ if>y/L8h:sù Ίqch3JTvx h8}]8[Z"K1) L};$0Ak?*ejqi8sxGAq͟ɧ>bm^TQ*YIu[F)I@H)*x'+Rča q4Eo|dV8U^̍gj#it;M=z2E( E61(JB4'̯|mNT.Xg^i~|8~&Hu3>DDp|OWѿ}-*i6kS#\ivG0Hp}pl- %))EHJ >{ \ ћۏ\GDo$QQGBc=Ueu8CE&"CL$4a)', &Q0L}15F腯=}1Ѵ[yq1mػZOK7du1ҊJ'yE0 @s@&lxlh2?futIojؼ7\6Ϳv9GRzRO*qvHS8V13^Ě:;±tE1J*.wfF)$vI50[Ӵyp"R*3H/A] s̷DZ5`DɶtЅMJd풶xUW7v]},]|[&!wqN$c}Ly>>RI:v.bdoE3͋`4/i^M|.qD aLiC4a:q0#n '.q&${ ^Mjt.b7fL4mUɚZUU%[CWZb>T y0.J/K\̻&0DzGJ*o=ILxv٦JQXyXӎ?zd-$aܕf -٧WX)g~qO;)0WO:;Ϝ(}x?lE̊e .i%k߱Wy#RE & g;e8!zI8Z<cg4U%zUW1T},"e=؄R¨W#L0BNi )^Ia믤ƤFrF("bϊc~>tHCN+Yu'O1?~'W]edjXaMBNyJWoGN`Es=b$$vv,\_4ud͸\J1"U,/\8vsS `y99-p .0\|Uҏ8a4yfry1h4ƳF ,Jjutuv($(h|`Bf%0&>MU+4ky-d_R=f"oljg? 
AC^Is{lxS|`rco&a?ZA;?2?)0PB_s\+Χ T] |Rr0,~ ܯzW6Pm8_s%K]ҪFe8o[|εWnK:[])&Sxr|Atߣ9TX y4zf($V ͭd>aЖڧJ:W P~/VGKޯ|NWVAnsbdpΐ^{xB¢sgW`W9s&#hf9பURT&r0&g<>Qcb@y?OƨK$t H=QX5nsFӮn-dy6lMﯘ'FyrXǒ.z\!5- ch6,hwb@S]@ÍӸ 5Z [_0ZU'K˻&7?J d/T*.4]S'khw_V] ΏF ǘg퐰ԐhMsl,5%ՆR7}LMޡJt4[l23lGtGٷ:D;]by1azcH+k{N7q0znF~C4gښ۸_aekP_T}q|6O6OXfr=}#RHC %{;Drn4 t'l\8ޜ?I^T{,/z5(r:\DdJvSW2y$:cndzS)nZS[hLqk8i7-~ -);FvL/BZ甾[~Dօ|"zL) |d8۝xM:JOptZKfb2Ui'-OPt"C)3g3}"";_y>'q!LNN?ե=] }e"OߤIm8cef:le_QaC΃F플 X]^:B%Gmhc?Q %:G;<汋䨎 Ia-c1^(dmW~T"]]؊S..(|9999Kn~y[x H 0SbʲKD4ac0UX&m0A$t[NJg&f܁& 5(KƗe2%XfJrY3?SJ^ӓ(di˜ {gzNcJJF_@G VPŵDjmVxz+/1jtqee QR0Q#BT􀗱AHu"lťT qxJC BT'2L&MaK-2% M0S/*mAY3Xo)KXiakAw#l;<{pQG6jIxe.n/6jœ͎IkԓYԮ[7i&hPWtgSa="6nbgԷ͞](]xwP0zf1T p()[/ZRL\D2 b $wX}zlQ dYEWA\TLO#SZq'<lQ==x|{wE=/&$\?W-}&x0ӧ'1 0'+Xu }q݁|JcS"r2]xTֶE y8^5B x\,S3X 0KX$SEݙuUEtp"SN)U,?%ifD&:-{8f6b5QeO"9lA8t { 8:@1_tsa#("=`rŝՐ\|rzB\~{_.f_E混™gdRLVxXN{1p4kE!/EH%6] *ȐL cv^2Rb6aS|]q:XՆX Z yP! O: TL:͍ TPPIXJxdž1Y+ggT77@֠P>kJ CZ"yO so.Z^|ͿjQϟ(Ő&b2U'c(HN>X2.7Bx Z>_/{:c_%|lԓHG4N67h14ޛxHԔq%v4^Yƨk$'apTKB_*$cRH2޸yL`⒊>Z0wI8dR3QB%YqoD }1 ԣ"BS@=t=tOQB%"J.=axU(A>xB3P4z#%jO]$mzsys1kpOa;(7S-T]PzzkSRӇ 8;-/d&8[+o@lc~wjiw9LVˇ;ۚ[,{ ZSI8.`ABYسʛ)zm}OhƂtOg~9t3ɥӟ_}+zq90~y> Ftoq 6¾y oN¬'!ۋ?_RA/g+Zw#c~0߃?zYD3{b$_e ֵcadi Gԥ] _`Q.z0q&ppycfPsj4SKivgq51؃"K/EifQ_.E}Jl@9Ze: n%VbMXI ON~CUn\燣]^(Em $ f .q#mZbMvziᒀފI|F{~ fhMc86f^W[]ƂMBJ΂AYXe' EM'c63":W9/gb=^_|>; YbDω`/u :1%P1f|<XO6˜xuL>+h8l^拽ty!/MEMXB nyV+vфhZ-e$zsFE=n8qH:#/!-D#i?o{7uEpv@YMs?\R]qjS5tUg(1$)VB@HMQzsůn쭻Q9$Jx/"MKd<U$zh=wNoIelCĩ:^* WeҎ1Si\\_cvIr1yYItGF34'nEeH4}:`DPMicق)2B@'ڨ? #` 7"mm;El ɰ8!;N6(cɵ9j0۵ lM`S6h& R#h6 6JJx+4hpqh/`O8mۆwx,n財}< _2)qs-ŕTvm,~),R$ƹJ$qPAMJSݍ5*ۢ IXv=-߳a2:(ƺno'Vrۂ=!cb;Ii9?k'rC\/̘7wO> 샷5J sW+{Zd&Ar񿾸v9'??yQ̳ʶ&lpM{O_oo]b}n~~D7m>-JYs mᴏjѵ<4JFUFՂfV#e փ~TsFvv,G~b}?9X68@`\)}fz/v8:}/? 1&xXp0pygxb) /<%* ?s%F UPc5Gm | \*Þcl~L؞TvX%q Ɂ W熽& 7Rv{R8OlȚ \@W.,sVei F@K הkɬP6p`B2d_(DZ9wfs<~7ӨG-BNb ;B/ Kkͭ-9ڈ#K*=9d4Hu$bj8?wZ)!Rz \5 `|P4]p&:^n11Zg@,9׹elc*h)l82ےBU&{s6&1JP 4+wˇm%s,s6?yo7@Lg~ȣ@o˥_U?h>Dݧ/1%g+ڕW*Wj6>uM䍞,+7ҵ]ʹ.{M-$F+6](#l<+hwBrͲڙFޭ\gBA~wr 7ylAS[MD2 ;L~XU~NFszfu,ޤL+֗\^_\_ً/yBE݅ĕ}9 gIXMJ!xT0 IYj" 9J +aam:t!8uZ6gtJA>')A zH~Bo>hаx.8|S*&v~)W%$:M 5rTX\p_eyw:1jvbyv57טp|j{ǟԣ\H5FJo&QOr?ۢbp{SWM$BJjSSRWJ@M"4L~jý-wwRqq^߽L-M~T/.v[=vv*pմ/.Oﺃ,x<ûd޽ya67BjG9Azn.smq7o_=\7ꨙ7c&C꒜t x]#Rpo"$:b@ G \ް 8Mnh DV8-!uRQ* akKJ,!LjM["q6Xч? w+ `즶5;ϳW zM3FK46~κЄMPK_ǡy/Fdq|ۥJA( j=у}RIL`k"ia8ZAKCxI@YGsnjs ! TNVLj-7r[‡P0y Nh&A[Ryc΀ g1ss$E2h F !1h=Ғ%Lڤ%^W8 q dN/ᠵMs쳯͝M\0@K"{$Kw9raί@fŒhM2Ha8? 
d<v.!@J%NzxjdM>ѵ<^Z ӨÙͿL8|+5Fmz1Z=$8=3ڇ⃁2\LлĄ-F6aѼ[~GwBrͲ)>  tLc$'*6v)+#5Ha2XpA2c`Jw #:Gq)%Ж*FLN͑E8P!`F%-bh{tP-ui[lJEw枝XF:ry ڻm0ʥxQ±= ͕N?zf 1Aa,ܛ7z^.c |t g`_*#6(&({b;U73G3%$EtU*˽Oۇ<&=&I׏/}Ur=vW*)2;Og+W_8Z4@QDEԠ16iP1;Wp_R*83>HR: r\8(e*srᗬ:Jϩ2A8Kqp}6@:/)z#pBV 6 V9,P ,T'k>))ܵ`uKoY U^0_AsjժDJvJ G!Zfknh$/ HbHHFZFƽgrh4YhhL33He=M"fA@3X<{$Ek_E?JZ&G.%lSY ȅowarUuvy9^QHz ڊP6Tȹ+Naa)`My]At8h tlc' 2y$IIDR>F"RK!OB'T83/O<493 lxtt%6aYȘxAd6)j?3^4b0+!:yڅGYsZp=xinm%U>#ظlO7tc0|v; ,:G'G0Ad 5>fv3;xIp]:@Pl1ߋib3spEDågPƶ+ 8\uۺ r|F1JBDz͂ z O.Ɩ~ (rɸA/JVݾn;P j rMKz AZ@n6SbiOzM{B S>mf̗#9e*lFO/-4*@1rn../65__WǟpBJӤ^R]KK'!E"0F0ܭЯf7IJ+5HФ~ |<?Vϧ6:G7,Q^} fu9A/s}mOxc[תGƆYxeKmT"j ?~&jMzhnsp}&jݺ31*wBڢ$9) Z~v$ ArzSd 1t@wjHңS邓Zf=bP6xs Go/_-6539|" 3.|S;!sd\ڼpc `cSZ#ywKθ5f4XW:cLqBGzq0{"H.嘝yPL,LIk0/yQEa& / f~˥]㺞r@e*4/EG5VK=n#g{:5*E-~3VY_,cP' O ]&.μ7NsT~5݄aLlL?џ i9n>k;X}_c)#}7RzuCJҳ^A }4nBjBm^4sWg}BDɨRl9sQ YU.h'&x4+knHcP#=3#YxQ'rR߬@4YABRH"/̜Bz`{a{ld lտ|g!jF(J9gܫ$Qo>3'G% 9E0~\n Ɲ%I.I"8pT㙬 vD8tsr, .TW %`Ŧ-]<ɝ^ЋBTiV }`j~;}y/zH"Njz79Jhf(K8=ޡ !bckX*=`SSi<Ֆ+*-5}ioV1ݖZїVwjuWZan?""S!U>@KM{fis :b4 WvAGAI*ehRC Dž&p` pF6M$x 'ѲD2K/}uE֠#R:ӦM9SZɦo{%^6z\+Y~E>fѤC@遱EIIsxr%7%wkDRݥ],6mxƫ`Bd{J rx? Qig(yg/[JCW=:QSq@~|薑?چ\qq6h?Yhm y)zfL Q:Y0fC dQrxQxo9]TR(4W3K,gd YXO6|+=;I5`6e 줫u&t@0U+D"]AovJO>"E;^1t?LVo磟WQJz|"ҔZ^ߞTJ~y/~^MgBa4/} ܻ>Vx@(c$Ffr?o hH+"A?V$w3LJJZ3\⬒ #VF\ 9ǜCfL0c"k4"A1̘3NKXFu@T9YN 捐t#q'atX@rQ'Qԃ+Mύ[)Y=f)6P0v F gL9Jc^y!j)ltq:w( P!AhTY"CƘ`L9,3$Fꉎ[[v41.ݥݚԬs@pN~3ϑ|pZw ^$+Nj9:!](1lGdjVLV9@BR=G/]3LgJbf'J,AJ[V"-}9lDD,E:J$^qi NiFSC-‡94i&4DR ρE4X1MaxkpMR`H6(PF D>\FN5UfNcAZN xa!56ւRt"Q.g5c )ǢjXfR^/fWd3:ۢcLhߖ'J3E;V&3p˲N/IL0:Kz/nLmD W=kz`JB3|MK߅,xE?wvyh"ӮjB U8OJbFPDͳuY~%-E-&kDr5[z2ZjD\R)$lU|~n\'eƋKKkQ8˩&`޻ U9R q-9Rn;^ezB7X8INnB{ |qsCqևOѦzrZMq Mh{7aRa~o?).}M+,鏹^d A/|Tzcp_b4Uݒqm$SJ%Yn}nM1Ϩvt2mkڭ)Lֆqm$S6qh7AJvݚb#:MQG2i3[E2Ť|N5ۘaZc;S7c/}qBR[ ɯ'Q7d{C]@`J v|D]*z?o7fdRvp &uf$Vft1] •iɼ mJ>K ǼnAN}1wFCnwY+K(KRq!Yv43'>mwwjwrzJuw ǐQ]ze`bk:i6g|%z/н|s}LbgGSXÒұ]d ܈))/N-ՓMBSڜc }+sr+ ޛPJJ;,ʾl< QVZIp`]˯_ػ@d ?7Swf}N7ZƥVWo܌ddm5ԬstSאIo c;'"+ k >1nA )XJor ,2 e\2ޡ`&Iʅ7WinT%UK֥у#҄X$\Q\1U뿔xZ(w/~+@h&JQdaRwCW#A<^>ӎMIVJRQWK\6w^j!%F7X3w6<`ص.nw-/ŰPC= -PPWN|:5+ҔX*HgTԌ = U~<89YDxHvW,P~sp my`6~ҁz`d4"[]|(+7w*nX mԥ :Oš/&_ Tw)A"MCIUSc(3X: Dm1Hin8pM#ЪShF*od EA ϰKNZM<9n;몷1ī -~י2sUc0].M|&w=T?5F,^lxCnM>VdKQ@|EK͙izL͠-+v=&Ʀjz[c~e `I7W9_XR")+/D*2UZb*|MlX`pX 3?8Bדiiu3+Q.4ʢi+S3/ع,ʜ9^E'IDn[ 1RRsvxpQB!1QjtRfJK V(}uE}1{TRȠ'* X2Ŕ&uc~xl#|N fլ/ ( & ?mJ9FEտHiW? oF;L+Q:"N)sTB̼5T K9@_|~*xq<&nEK{MtڛubUB}fB7~Ś .8/|хHc5#LB{V&|$x~FdR]m"SFd1}p1k: B6n8fz!okVt\t~:gJYXgL;eG9#eL1?'>0)x—yx[Jd8Cg0zv~i0'W߹x O-1$ e (\ 4"?uJ9勎t:hT& [uZYY9 -B9ࣈ@0''Elm#P?RIe8.+Zf\Č`0AYk5%S+y nX c+F@T% 6 `'2 7ڋh8U܈H(p 8˨ XsD$+IzhJ;U/L_kB"5tDGyXd*laYb!,/샳D Qp$qbUu^d(DZE65SAp@(ð,k Ta}W֙$A4QxD%lIpC3kF+ 9NJ#)pN`} u`j׆qDOH3nA),EY"I*Pc4< ZJqmrѬ0QGF{>4c<XF$\9穧xa7h/!J53xB0hM [#9j$ (rZ#Ji F$(M=+s ,fΒq A5O(k(s'pQ4_U‡L֒-H%1aXe%Mˆg2Ab=a>itKDN$i7w62oTwbYn] !D)&m,;"4'QuHWFQi6fY||"e}T@V}|ړ$*'HP_o3 O%vdspbdf \E论?jhZ֣&ՍG/nލ'onAo?؋d&.DLڇ=ʳ͟xwT3G3OL懃u_ WXtM$ƙ_{Bbr$8MB?zBn駝L%ik;;oړ8&΁C?t5$}}щҦ%dT.M6dߞH;ƓK.]5b 9yYT;j1\jѳ{ Ӎ߭Z?Sʃ AQVo1B 2uK],?ŏ⛙e,5oPy,i6/N\*5CgDr)?KK(f߽MD!|D#38,ڻbA?m%ҿ#W7=/f96O7=<9;2JU¶ӓbkӗ^f襥ګ2Ԛnzy9) |v% 9\ |pUIW1&:+ Cbct<jVκ+ ۳[ vm|2|f<^0nnVsA$*S9!jG]<d*GB ٷjEzQ/ćh&i^Ph[YE+e0EZVGvRS,g@z ˵2kgLGWJpbPҚ!YcyÓ+T+v̢AК0[uU;75bzDm=F Qކm\ F{e Uچ zu=jAvހa$̠h`ڰͥgN޵EU8K." .v,~g5z4t1~ ;Pَ.=39n&o>y?CѾm #wpӀ\VVqRGi'h`L *"H Q.1FVTٖ`Ж'wVxitgm^U:ZL =Q/V1#ž3Gj.Qo7=b}%A+Ewy9W"%8bl3RpY\taJc#pTJC.8#g@mV{<< 7+)V}:zހ! 
u`;"R=Hhނ>:l>/}BYSx7$XM~|ESQI q*A0=KeNv2ؒr53*y[6.ҜlJv|q3˛q6|6:;=:C@8>lb4WO4&(_\kWr}L[=yQ_7eYo;8%6IkZMJ(UW"XFhUFZҍ+/)^\ '95+5#x^ fx4t{xEJ+5Mih~u/-O?` 7=x̛uky nf{=rA#v*ޕ_$U1V<[I0Ey>azR)x gFXY|P"pnNE+&5p (00@3эi옉IPhnBFh!rY&!>c)5:wS.K$.)ۚ S9ݩ߭ɻqQ# 3z@BMx!?5^h=Rkx!p;4vHm\-F<xUSHRm.rCvvu{J7yeI 1ى7hB?pOyMQ:šM"fy<;Gޡ3G͸{T^Ml{fF鸺<;=q}ݠLxc7c7AwdSjIH2Os[EK*/]gdXӼiFxen"nA_8m6oa2,{ܘDr58:QOb 9ѿ36o.;NQwY'nt6z\+R{g7h(6ˬs4;87/#G9T zQjOI%)qzcߜ.<ϿqG{!H!4Jg]!6|rA.xOH5ݟ4Z72MVsqOR3 SE1{XIWߖh"DIq򷫿D#σ&jvUtI]|^ ş7f KFD@/=^>h`;(t8 VI *Q ^#cy-v3lZ+w)kWYn7|r=(1coQ%M51ۏ6DcFYXi&2SwVoڜe|p'"ıt<(i\Fs$]^XF4xYY8P%"I8+"2Tk-2ExXZ"o[ǨuӮAę2H"#*xA+RqxYAE-l ŅEwdD勒ֆq"pUU*TQjPNrW@D|%79գW|u,(%- +3o5.<U„e$ɓņb?uv ?Db}E?DZEQ_2;_/?pH˗X4NʘU&]'z.8G~7~kۦ6ʱyon?zmaYn~r7gF%^haBj-:]-6 #""ڲr9:u>ArCO܏X݌ #h Kj4(h9/بy.>/ݟ7YMCN9Y԰U>f Ahc#x÷b{.=_bMvLC(YHEّ1N?m[ EMw^/>M\n{3EԦYKHScAUit]{|5Zq쵛߻A!|5`J$c $ -G' bYHW>;+1EɺQ4we"km9.SwUkFO*^o㿎WU4T#gJ$֦5bPl%yNy6[cuI=eiPB٪ߵJ H~\ ϼRRI^ ObZx6b,r{\ .>SLZxJ\X!6{xaQZ fXF(s𽨁R۫yYޏ>jU(weGUEyޠ+@,9C+V**BxuRNu[Ʌ}͵H6bHD8'q#%Ad15V'!`؝-WmФVvunMr&neU?E'QPQwF3rĎrO|=.DpDκ@%>\Iz{G|+L)3cֈJH )ّ1NA0f}-nOra\SX2} +}\h-ZZ8Ϋ#XDXAďY P"r[ LꎄwI'tgd4SMY5kU|; 222J"@L.k>,J?etPh䶏UfbOc0.x[R{.GZlFp]DNy9"^֐UQ{w}/R GXxk\K@:JnV3(z5\ԝd" U#e\ ,(Ӡ1Gؒ)6憎'Z4;@c%MeXk5a^;fY6e3hINܮoOf"@-dnq4 u|tca<!5OR+J$]^cZm_KZu8I;Qq4$GQJrYji)m6@J4&9Ц'y潘v/)cjP3;12ng+``JEs۝4jl.h٨9MHxb7;UeuJgq$;2Ɩ!Ap)״)6<ŵAEvZ|w=rpq8ϼuPr2pXuWj㷝|XZg]^Bis&v_>4 7"%hgߘ'9"V$7< 3cD=:Usc{ 7 | *7VvJfL9tFSEMv_fB G$Xdr.fRRMrDOH=@;\3uI%O?K5?o7_E}>xwyKqoęWoo{rb2usG/[oc3lqCMgKX!`-j&/e\\q(y!K /U\w/b_VumGSzס_uەf}Qe8L=kں䞅W7-+UU;XkzZ1$Ԟ!c#n I@(]%f3!3C B7BMz]DQx8M7s܌H) 0x#L#R&jH܏vcK>[|<"Wtv~̖3 x<"Z`W2L_]T?Focl !BauB Y挽Fw)V6hF0]RXRG#cK Z8- 4o|CO9*ZQ:V %JAk{:VߛǸ ZqN! VwD+>A3N( \ CR!LJf~Jy$HA=8'}4;w~cE̜PE$MZs6_/\iO!oQ!3&`.3o̤ԌZ c EƮ*S#8sR ި՟q- D5a>(t3<'5EЍª_)6pu [wC#i2IB#: {]XsEEh D*`DXr#)vdXM=ZZQiۇӢ13cv)<`dK˓ɏ%nx:XMC<@ks= YʧB7F$ir;[.hBL? ?;@8{yzy2TD*18؃I:qʒ[e2* q9Gò,xX 0BK{3{"֚̚sZxؘ^4= a3EWҳsϴ,+!h_>2%7la2+KR{^0H,jE%BPc,[UP7}cKCibtB =K_(R5zBz҂vP\b*(2~X @̕&ܐq!A_)64iT3jJq!w hHps}qV"f{ =ެ'V|sf'݇/d!lla5ӎ# eUw6Jvd7{*;o7LkA.q qpIpZpBCZ@FsR'.#:~F$~4FrBX% hL ey:{5U)Kc\@|=OIw=膩Qyly]ލe]tꝷ~ -SRQ-JVn̮<phZ$naڎg۞6#h햼E):L#Qװ%P:.޳H! ަrKᯥ ۜ Dj3v\;ߝN~G<Ӂ@t7+;#%[zc@lTóRv̨Y< Qبq DGب>u}+-OLq/"pUqWF+3b۠6"l@sI ҊL7M3BoHOt%<ƺ /#nEc D1c*Lh#):Cƺk-"@SI;*;*r*y)\6=lQ2>-`9+Ȕ̗d 1vG]<o=D *_>a5O}#RՁ,IqpkDa:#8 .y^HJe٭Ѝg YSKW+w"`c)ZI@5[  t2·{+@.8!S(H)e97oH KW* T' RLzr՚c{C9ӍZW8[0QIQlSnu0 +dqE(n*ܲaK;?]%^<=zꡢv9Bf< ʯ?{9%␸v')',)qIk ks.w˰ f-,f 2WsW=77Min[[lWig ?mMl B8:p3ⲷ%y[{Pįን{$Y!\1Vcw%E>ukP>gWAz{&0[Ƿ/S. Ke0Ar]9q.Feu)t! NkR*eٽ-ݮ:ov~njZ޴ yRW%xNwlG94D%ytgtzQ|W6NW_l (>zp/h ϶UuuV3毖eiCxps/3CA*wm_Cp΁!^\7V[Nk @gEr$9+dKcSǵ)˝r&2I5cgjj X_}3mcO(qhj W#TZ{Lz&%ⱆ,],3JHۍ[Ѿ2HŵZ|vTq8:^bNqQ2_?<\屖Pʨ9h`l40|BKxVX N|l5zC/Z!2ݝ֖w)AUCgFY]:JVs4)59ф/;fd!{G0BEAH-&h[֗/(ϕ d[ 00by JXy/nN'upYxn_EKk;7~]0/QsZuʜQ) \/”4zDjcJ5cJkikդWP,hwo֘w]6KRF]Ai &/)mc"rʒ` DBh41CU@k?ms`SmGj*5r]j?d! “:aچ62O/y?=(RZk>h.YH Pc1*d P *AǂW*B]Q18-5F n56jΆ X3-Ek@60褑ʍL fy־2b)T[ΥBw*$)BI(UMBmTOaďgJw.} U!N*6BX6n j>vM 3?²Ixk$N8NJ/f ټ{ĉk3>T}p=Qw>rwxD";Z)E}-kV/Ŕs}>9J\ : 1 >(g-A!`"Ͱw q #x@ \ Cؗ9P㊁.z0goo6VifJkrQXp2%cQFzQQ3o+4K3~dGK y%2:յ*taJGԏ ]\wPLΩJ{h<{^k:5ޙD7hCN`)ETzJ<22*+(r[;WmwJN(DnV?Ū P1cXZz^Qu sALvXS 9F]`9$Z PCgOɠCNL'u*m+.}ދBWh~֗3s9w.z zvE A3:kVp.Pk";bi΃ƒZF94_mjzۃY &l )Z#NN2G6dxfUp t||A0>\"k{_a/ ӏBJ{vgM,(|ifqso:;흽ݷ9s9hL7~^/Mω; 7}}y>랷F,?0֫f lz WP? .hS8ňMmu ~dq0̿g;uY^/~ ZD;s9>>9Z/_]jy5GD t^q_ 3O*|(%6 |xzz׻|Nĕy6<6Y1 ' 'cRu6h8q^!ɟ.^fr ImW=W*5|Rcp3'RoYqκ݃>D*!B|T.6]>׋ͯow> >?^;kŮó7˭a?ҎJs`(v^4:ӝX&[\A8^ls8'YOḘO%}z=y4{}fh|7/}DZ׌d0RB]d{tb. 
gWsDžIgN+:2}dʖ^g ʵVZ)}53Y gi)%C737ǡ^LNZOy[/QoȞ+@m7#OcH%q@b95MD7v-nK->Mb<<;- R7-Ҩ?^v;sJUɆ3/Y=Li% ɾ,fob3}eƢD=5-\ ZP֒i*tD0ZKB5 EZrraU!$%Ȫ{*Zׯg1}}JeWMq%}*儳:NBEHpdLR/ (fڗAHcD 牢EP\1(0@Qc"^Ez$kXDMEF6bF@~7oG같u-MzG%!n&@}2j+ ΈvS PVA RZIM29f q'|iZ꡿+'J^`i)yFHtM<7F ÕȐY⍰JJ  04$p$[ xT4uJO5eXS5%bM[QJ_r&E7Z&F"c`D<{Fog<& `t-EYq.Xh7 `OǍkbYPp$r⏬dnyJ?r+9gZN"$`GIUٔ_M9'I*)'ҝ\ϫxI)&WQ %T 6nʉOOFB=W'q-ׄg efM | Y,WZs%0PL{oaZVE;o?v-[FXV]b3"-:NReTpL%"^s55,`eZG&5޵u$Ba](շ*zX$d0;IqDzHc;ՇE9Eh(QSU_ץNG^ Cs.$M!B=?{iwD5zgY/Vc>Ac+i/f? ͢0gY7\r);Ťke9j:dsSba)I6w[a_o6WC-uK֣ؒW8O|싺spS!,MLOgbv^)[Gzӣ;̫XhOzu&G= 0 ѽ$9=\y?緓s6>D,ng4<(o)G3iLxbBhOUP69LI=Ε3#TUqsyդ= *+)jRJ- qH-%gp:6Yd]CrSOGjjj[(FGUHfl;e6{*5/xr|ߤ=uBㇷ=ha}18&OIZ!֖ퟷ,sCZLՆM1;r/2ߜ Z?oaOt3vn~pr̷[3+M8\UdjXV-31Ϟz># e6uRhl!ǮbE> lB\TKSZlPܷfILE >)[k(k@iE(}C{C}P8䭟[Z(\ [9Zµv h Do-E',7_ٰ\=זf}Y9 k}i5Z+ږy[֍tS=_nd{>yAONƾڼ5ʁhC5:#&i){UHqL$5WQ|?ҟΐ%Y{W~Q07IoO7_˫ӼKO8}d wfWB/ X" JN%#ʬ_ ,|I>GpLAmwDt fAMs=oѹi6ׄEj|J?H } ?20Gw9faY=GWC+H񶏎Е?Gqf)|[q}2Ꜷ]=ik;1٪_u?퓯~g{*߷ۼrU_.O~OL۱?6ϿT:ݜHկ/?ξ$|/Z)zƚ<{wRr-5?m2GX5%]ʶ'&ɵ>oc|70yS=Ǹĥ1B{m,{a`yZ2g \w#eQ*1JfIL FWfýJm-Oq̊3qiC_bG~< bƭgq:><fIjs j<^IY ; mSo jhXVcgáfm4yZ]Ͱ'wBںVUvdPiѡ-y(PXr#2 {rem3/aX9i١ ޖrHT NgP9\ )( G7Zd#(cJTmAMZjoJo7:Jo7қC6Ko:(](ݵ7%-ilvтb + fh%P{3\s ?%AsC^n;9Ҕ jÁ> X=K=Ŏ8joCmqHbGRIB6׿_, $7pQI>Y:;9JFiivFAEW sY+ 6ٌS4s;mk+CKsC_t]=@uZ)L܍صo쒾_XK~ qWjGwgVZ_ǻ8U^g?o0К&;UT=v~Wxb۩niq(VP@uBKVA]UݪZz.B`תu4^p9T-|j/:n8i@-g-cjVb84L,(Y b`@fRd):T>6ؒG"68r$i#/di]gWďUi\0 woz}u/l|=vJ/ɀ{7g3}#5D$d1>)jφ)ld`%v5x>M؎qbc줠Mi}̤8UؠaX3sM1gD}08FC!)1S9QyLiCHipdTPbUz%BsjSRVdce]5B-Ϡjf/"<[)GW(pL5,)6 *a01z~ajw>U|{ J7pmKd=l)5m Ғx(-'Ѽ)hR S +rBI9'vUaB:0WX}6d秦Lbλ`]|s{y!Ѡѕ歽]@M*jBڛ[؏}P2[d$3Gvȥb~v=$6~6*#dqƘ;Cz^,Wmc غ8ޮ W[@^ G;g FB2[;cb@XuĢntFXqDd=ޱkQa'6@%"gq2)ڈbsf'PxH2ɤ)'CRJI5i.+;[p\"P'4kLm݋Vq X8ld !صcPl$oAH zd혪 zt`Ϣc3d{;zJ^-ɹ\an/g6,ހ;'3xXQj`c4Qg^GJ-ܺ#24]J65(ˢvRnԤ~ %ik)KF%+7l4T%Dⵖ|qyE%cɀ[aG@++e;7zI]hNjt:<҇88$C!CT1 匽]TɈ{ލ*F] L&M3IZ#0J1숈4Gr ' a(LBW$.4 gZ(XxLO`:Am% V=;34޳[iq@65a[I( N~hB}ʨ D"iiotm႗I=lDZMVn%`=gm_O2zD6ݟߴޥ?YڗNIIcV[h("4"(n<Wʋ( hωAA .YMf O.x",hm(3E8B17ֹu/.nţk+u + 47p[2MRB!he3IkGDZ22 KlTybD%gt@PV3"KEύK{X ,R1h&y&4^gP$xYS_!Q(# E-@_f|:^{J EZ+)2 N"^FŔRܫ;)HN:@s4]%5xǚ;`Ti)$x3EQAW Xt]ф;%uKT/-M 22F&?KNsl*ߟ^GwbsQA͍*R2Q^"Ȗ m-\[Oи4l܎>\)hvA3WB6)` JAި7W>`kQƤdBrNoĥa&?T-en&(qNqEBR^fB.ťB͙+{0&T ZN&B6Gz;7cɨr"yO`aitvf2+7?e7KX:,WB.]\ePf@(xя+۵&,ܟ+d 4ҍOƗ%=Ը2^_Yq)=dҔ)-N \|Do'_\F+\/|C"0ݎ-_??;;u$3{/1D=zI*鼧*?r9[/gÝ^}|g!Zw7><,?0Z̯Q܄$90iԂ:[͘i yI]@9~ӭf-'ݟ=ͷ,Hg9n.˹=ck刯y|o&eOO}$m)m>PLMk!O7|] R Jh>Y>񫶄ʽ 뚳D;bi;LvrvBI'ንYҘRǗeKq7ճjo۹?jd&w}dzLsj>8@pA xxy\A@^K첮_M, Ntė ˭" n0FU"- Nrr2 ' &dlID#+^\堘xӐ A{&+Bƚа׋o] fR1i xsa/?!Wci0jgͱէ~<`po:#Ai5 SZXH6Sʽoj(N ^!?Gʀ<k,&oMMhA`O%MОpңͦML hlYu%tZuRcq#%bY>U–!ET^X)+\m,>Rbh_ccLrʃgExBLu/.:/7!GHQҁR7>u`Qa#%ŏO@p"GWx#E< ߎ +/}0}sV>FR0iO SJUS JNq>d>E89:1 ،)|)Sb#ҵ=O]0d pB,| 99irw' = OeXT8oa^pmsr6[ sd ˯?ƻ~$Vc?ė|v?ֿwSCV#adVn80 C S´Sx)=es or`0A,O J)J=09t)Z]e ]:@F)2UQy ^Q|*Y2*Y2S Jm9`@M›3!RR⥣28Z.Tip]oOZt!޿hr{@wϱW{* {<ȴC Cn9%%^dKYMJ,שF7+akͮͮ% {x7#oO rBkLC76b!Op)1oÞg1Pv"*w mNʇcXQڄׇ 9LU{ G==$ؼ$0ǣyH)7/.(;in**gZ*]W`uU I$8?&O y_=(Cۡ$CB)v qg`lOݮDAOt+|^`|I6%Y.ے77]۫VP)S@6fY@rG/?44onj~E4A )v7wy/\Ym2Ǒ'-?GonVY9A'-U;m?F[zg1 2TZﮝ|1sĒQrhS4'#ShZW~?b%]fz.|,C0A"ea$vjTf!)~e VGۑ몵?e+_?*gs_y8!BBB|(8w!b Dc<7 I@+u}U'\>ܥY_D!1w2L~l~lB1E1UKP9\JaQC" oт ߢ?)%Y~$ɗS|9[D`()R:,%V(n+\@BLs3~O } !$ ]s,<U2 sXt5SWyigkk6N7}ϵH|2+P9D3TME=R|(̊ %e}r]Z%0^emkީNi Gd VFt.oPc`UFvt[ +j[*<'0"Ti8}~UO1m$@s&K*e\Ra'Qx\"i%)\80[ter:M&e*cM BNR\%o/.1%ӝ4a+"_W`Xcn6.3{x@;+&IK%!Glkb 4i`<.Z%]MxmYb2nOZi4$,]@l/vl5v+ !64?y+i)%5IH6*CS_'OaIW/MM@ {F֊h;Qܑ]b ]D0c)Kjrb JJΨMf!RҊ=sn+NlfQ砱mUwmX[:Vl}AC+?BdDPVV5'gqlgʏ(i)%G/V;-[[M޳\KY9K>]\ &"߽ǩXHXJEii;Uen` }kL5V#.<$MH8t[V-bo~{!$e'.^K 
pqX8b'N9Y ٟn}f)` ޝk| zA'hw4FSĖkBҦ Aǰ}ZG<\,p>)8Vj +w;0_Y"Vٹf+۔NxY ?TuKujj~- gj1^ғOF q(i6ay!l%A @=ŭEgZлnIya(筄e$mŐjZ j="!H֨p<ƔC<7p\"#)C1ix"xF3 {6 TC6GZ!J76|l5HFj(Y_p"lQcB!4`+][h$P]E֣sThUژ1k)$D[˥R4+PV5Cq+SͭGj> <_vo}Mp }x%nL+!yF]P+Z~.YS v0HK93LlL߁7LUEkYo]$* n-5s8QQiPQM )k2"q rfu׉HK}aQjվnpQѳ7?TYA2}Կ&?VMN?;u.80öok]b v4?:g! ^L qSW=L:Dh|~r=t^O8^4f?}|u`( eŠ .Sw#<уԞ?ODp:Ϛ@W;}U=LI7ϯ~|=ު[~I7~=Mޱ؋7+W?1yś\=WJw6&~zxe-*^F=Sƒ)'Uy37d>d?g<]w/B䖷ڙ1zK.F{᤮[^=ԴS8SR8V$Ic TvsƆ2OJXpF7͘e?Wf3v+OT!AN&=6=**"܄zub6o2W'ʐ饾ג=\5D.P~7AzQ?)6Suw>f*J/:eEU20RE$By,]xx]/FHkoQ/u/~y/]NsyWQ7C#OOUcˬW6h'/y4&_5t~J Y1tM=8|> Fk_Oߏ5^|~|ңF|z(jsJ0tRxpmmm#!3f'#IbMK*2Ro>vOw$].qI}Py0BH#)]c.:e\j{܌INlNW_5Fqb3a`6$etovpM tx3C_jxdC_%>`Ͱpܢg'6T>_X04 ;C}z ?~,Œ7敏iI<#BBhܻ/mb53 j&.9~vǽu.s9Rd,z^{Wӷ/7SO+滿>vo~|1|̬̕_(Ӡo;?=z<3MHk^6W?}MK8>7 G7P쓼Hx{yx@௝c_;:/f믧=͜{sV@F:`~sGm͍|ҲK)Ýycm͕xRz39k]G6zl#MN%rwǺ+=mCXO{n[14[x3Vnpp?70E 'Oq6~=̓Ւ뭐pkn8 u/Yh==0U)KNT+,;.yc R== x-].REc랞{g bU.fIx A?`݁[#4\{ T9h whC]%/ Pdh=NW4rM1+ ]=;xz?g%ayQ Ǔ37iPUB:םU2M4]׿~?IT)#dHFOintݡ=4] ~0zyxB1j|mȳda_v|@-=U*"*iņ#j֥uum%~4^+cT9kL :1.enw S6? á寂bl,/ _'8e=]rF:G_)y]wehFDQMʁ|`ob w,lrE?R2I*4G"< /F&<"O؁篂b=Ͽ{6x7)xC_)z:qD 3Ǒ': ~/m@,B}J@VW!(_8_JJoJYsF>.1a@!*'?*%r2@%Z4Wڰ\zUs2F} Q.|_{jB- U.Zs~ (M*rB:1V*VNe[1'pEѼrK9A 12g 9| H7&,B$rJ@M;Lg+\l*W>ZAŽz vOjY̞Ou2, Б@ǥ)%`{FA2&y,M9zcTӮBwJk{u:Jڨ6K4qn|/?>!nOt@㛥T|Ȭ -/oǠ H,?0__j5ZdN?+O\!eDWϔ['% -#*r o-K6PzR!Y = NU)F:(ĵ\M6C] W=gy):fOHgCl:(7\StppU?sp5J }pFf`v֩?L].&" >Pƾ[z,4lHqBoJ*󀳙ƙ?ds.x#I%=x,%6ɳ6Ϧ\Uk_wM=+8>DxP&SxzYO|_L!.LS)=7wOA4w\{Y}~OP}7kuxVZo gQWӄNBŘe+2XY o3{ʽ3̻(7%4#%S  iĄaf7Mk4t &]2;Eb D8797.:bJ$zj V axeG2=DƒfNYz'A-\ XSl9ytz;$j4ld>(OV`߮Q\ L#ްJAЕ NUbbQ6ȧlA@>bkt%ԡx/Dˍ7 BaA )Xv) e ׊)kUQyW 1,S<虵IRDU)*9m:-9ivڇz]m,sy+>au`H\Iy@.Yަ314=B>i U*W*+A01 7dCAV^d  䑲&ȅAef %MPĚLz=):(Kt̑\]Nΐ&gW2LbU2U$V%X-f[e{zf`6P8\qK[l|ƅ^`u`Z__ Jvf\]/m{6CmnZSzx4kiv <ʖMlDvkcyZg3L]pr<^}ze z:;8ꥧͣv~zfgZj5sVo$:S'%+= 4׃LR@AJ7K=Qr}="^']l{now?,lÇl%,ƧƐ[T;()O=Zktz޹Z(%E=h3Fn^1ȁm13!m|EyNd$Q8g2Y/$dZ@f-ˤ:u #7+& ^I$XɬW>k68gkI-uLOv.h5g%FKmYe3 HEгR>"Ē/JGߓ˛Rޣ&䨛-G!j$KYɴ{WI?(ob^Mb2zJx:vQPiWnZ8Ccܞ!^<9$Y[pRQDҹoNJEɑ`*ƫ۷ebR2/rr5뾯{Z4#R-@9=z( AV ޻՗77W'`Qb=!`(b3^M p!F_otnc<ѡ&Y_|Ov{Y bE1~Pb](4W{)-T5̻-CZ)auC!"_O|?LbT^Rq9A)>9jn¬F{rŖ_T^M ^Z< QZXhjjJJl^դF4r0-a3C 3[k؋0lG7>[䖸kop 45£)iZM2H._ u k7VϮsw'ZrB2p穃:u0z:hMmSS1:QRﱄi[B?m^`COs-WWTDc:(kлSp{!IGGٓYBɰO.i㼒1:YFs<\''c2_BuA *XBkiL>@ HOxXf}ӶӖbŶ=]܎|jz&k_#2#y, &$dW p*)OV ^),d`Gò Ay.ȕ8s0QX g2kPKe f5Wڣ($@!jW5 $NZ}h8\eTAk,/=4uіDV  <$'' ^TK_}aܖ# D)"KnWYjC33#MRѐcPc\%4bCVdolFHF(O1 y ך-L,Et&G1 Ұ֌ق1!"##QH)3k("L.-C31ZSVSo]ˬlwdS7Cí"aCfWr;ս܎Rk2)Zzpȍ*8Сޮܠ6)Z)޺ޮO'VG( @PnvvoSaԒH t䚇&srHx-0 Ϊawk+0Y+ Th ft#gF5YC"H죋 Ai2 ,6CM4Ԥ=S4̰0 6٘ v02!3*E!!nP"VK a 95nmjTr lev>lDM';lr"gYbe9WvRlPȥmDXrI1e Kc]:!&vԽ rfLGԣS Հ#c 90+m7rsv-2hrB!яu&r ʐs13;) }=TSȖk JŸr̀ɔ fRM@TΆBpaqRH)\̍C>ED+ cVlh%tb8אhȗB`Èh74hP8uuCA|_& ͕p5NZQ+rz;6ɥ 1 WҠAp [ t`l^j(ms lH9ٍ_"AfD5'dK{]mo##={;k~\b2YԠ뀠U`ܻɱgS\ ɱ̻ѱI(cǦ)\ 3P/b Bj4jawm=mpn(HksIys&M=gbgI`]g-g:!à:Ptwwn767Z{<\X!?fIfoGgr rsևR^Gɽ}ܥz.]ŝ}CaQ @Gl'-ArovŪݥ.S]'}I_>m"Eӝr_|]D"E3A$|<*みz?$el 9vlFWMbý;7j~_*$krvMU{paµx: Ƽ9J6v/v|Nkhg*iAKaϕpq (Cㆅ\ F!VӒP_kx4?#{ĀA,{㡇;B<'f‹ׂG,^7zĹ|q_3⣑voiSrԋ/~ ,q Z^߇d` k{T(/ck@(RrO$+"@f?yw{,-6b<,X|ߛ&m$XVCoQGohusK4/>sKS I9iE/@9խwd\@!K$&Vi?~TeHBX{_.:pڜPY1@^b9 y,;ILT63v# CFYcɓGX9}vըqXs?KU 3G?Qt%ω۽"C ގ,NM},zO Hbf&M[{;+xFi ؀ϸ۝A9^ qk3\1Bm{> h]\2#fO8ҧ4IXQҾQt[Iqr& bK~ܦEo{ r7PpAP jEϥөrS7o-~ݗa5ERksZWyu{lMW뛇Y#/:y_n<'+$I;)Xyttӵ 8CD}$+G6wY0K2uwnwTo8<UH9=oPO7>qjoPuC (ƕgۀKjzh)߻q hKN'8nY~U 2c^IVXlUW?sdvjZ7^,uj'}}TGQQ5ޅ$f@*H&0:"4(I#!cb Ke(0o2쟃YbxԺLPK@1Sz`\bLo*23X~ЅNL|&;xK>+99T^2Q/|%*uW^fw r!>E"e0@ń)Ps(a Y"Y$S!XW$SzJ QH0P GEDU q :V{1GX<V)ͤ rH5JSf_;됞cQ3%V) JYUe4% C '.!&oӫ~-1%VpSPX TPfgbS8~kw.4A᭾U][EN)+ș 
MUaq|a?*jYaNcWD*IB) "dTʔi@4J#ʄ=%1` lFfs:]~k, >)/EU2WNR$nty{:-}ѣbƒ#_FX3ud#S"3-zzZ AEPimC!Y{JSil"퍎 lg dF&qz1ͯok]:g]\ߺKЌ H+V{.#pE@gwż _LF pURb2tQ7Y:2Uᣠ'30+ CυOLt OɝU>痋-!{_Ä"ɂz~Gh\){bhh6"i&&3\cTpqrihޢ CؗSN;&5;WR{pv ;\݄. 4aEND +Ai # !!;ik(Fc<i !1@15l>\ۛZ;*(VBf Nٚo=OCJc=xYxz"(4Ӟ {Ecgc&+˰b*j\RB ;YmF|)0,r ҵ2Ľ2Km:H&pX.Q4pNr [8t9E-UzJ-qtζ R je1Ia=e2tmTVh#y_>`r,뺝QhћSE`Fu uS`PIRJ2JH!{_,Xɯ/X̦DzHISpDB" c0i$NDJ/bW*\8Ɓ-0#NJSx ȳOLf+'wvrbT^Qp3'.Nfo H~z3l5o ' N*j)M#*HQbʁ*0"P2M"Ea @K0ԹJ,Of-{ﱥX!+PaW* j4m`ޱ iOB.'tB +D j呈@MXcT\x5}G6lu:ܥ["w9–cŔL j+\|pa&I^*!gX+ (|#r>S/*\!abOҪȋ\!VJQIc%>HqC5+?3)p+@б !{+QvrT,5cl^J4";W*C Xi`by,e9OGťo{#̍Z26fAI/U%Cc3(T) yBBkpg2K:xP!<oTt0q+y =4O%wb$O58G FWOa;NMl\5jp#9u7j/k7T%C] bz\4r=*!;-Iư4W}7C*CTCVE{x<<ܟgnSWHvXd(Ulρ=֏쥛7Z<:ul5HcmFuˏ?eO(9e5&iV*;[_o>gz~;]ZM P2s+fo&s6/2Pn\?%sn^tgפu}XLX!}/`79/&zַY "*V^Ζ?&/5n S@!>A@ipK,6xzADгhoaN8S:|2er&).٧Wpe 9omE7ڝY rn9[.uTFPh/h=J$$=E B+jWHL %l/wOzTˆkOh3ςŧm~,a|_H0e~'g(an-(o6> fo!Yl>K)JRfڸEi(%!AiF"a XĴE1NČ?~OL p!dmgpGUtˆ7Dc{r: .S my[{;4 K'z kRЉaϸw8ĉ." bLpE9GIA=}=ω:\7̉$BmSV_hLp)@R1a&!I % n "s:Nz(Ҵ;'K+c1 'ز6vQ,IJN$qr GFu4eZݚ.ɑ۞I ii# !Oŧ׼`>0JɞO/bB"ȩ}z{&Ο6Vx(G:udWN*UA5wAY()}un8<'`uUr u E#Y,PsBŅ:>@WC!;hD@.>ckG{T-j<2?ޕH!0J;sxO0$Q&si'v͞1!"ܥtl(nu;r?B;GM΃p=ע1 ퟯ I ET5r pIYgN~hlH ™Ќ,-Hgծʌ%Lp!L(8lU3 `(z(83~_ʃ/y.<]0k1_zݸ6E $Fn3UpovƲsb oHKkF~m]EgAMbg -` ٫u4yevتi$~|Xy{V{NDNs|?]*x}~:~'ςɯ̓y[Lrl]jyL;>]I?ne(cO6ݾm,rҖ_6^Rn~'eAVE2^~~2^A2(B0ȑ["wbZ6Ի1*GGyyqw?\5fu}#(um24O!Knŏ'7]7͓V  gk&mySV & ښ'L7E^2^>xSᘜ1Uc :>n'-,Zi18 l}7iEI ј0 0>wOy}K^}@ֳ4/__N;MM~^m(7d>yi%>@e,$ӔjyEiB^X%P #q E8ߗcgSSͰrľ[5~zÏ)$v_cýQnU0ۏhf>o?d?z07sjFz1ͯokN//TYy ;ʑigDfD;+Z W;*hڙDkgd.[a%';la3&ب8jmt6]|Xõ *Аű Fis))6ēWZR'Hh#TI*) 1Mbb]{S*.ؽg1z?\[!d@ԭM{~5c{x d7`{f4R-Ւ5 mhH(N_;}]p)&>UjQ{)+wkCm "Ijѐ  ਻*X3lbUYILGZIEhGHXdFYb듙s߁Q6iP7IO%AY:٤E`0Bg!A)Ed>}!xv9(w9%|/ja6*p|vށnHPY8Q|uᷙi703_aPxrCz$~ CD ͩZҦn5}T=¥J.9-*E%{yR) E!Tُ׹_:6)ϑ." ?9@@3vb7 "H$p9 8.Dž" AC)OLRE1|kBEb*r0@ 2Fz[[J;~R\V*k"pjΗC#_fcvBtFF'Y9PEY5%X{PyPg*ZB!f]]ARjc guyQq9NfW#Y}/3+Uu&SdjhS+C5(4f 1C]9[͑٦u4o+։ A,AYqm#$fwuẢˮ-jjty7Nm%MFu4)-N[Π1t a%`E҉n8C1b̑iKN #Jca)3TDNZ Xj26i?>2k ePcۑH)4#06e*0FZiUgVX#@4 K2"J|H@ߥXi[zU8` ˽]RSݭ[d[U6PgxҢsyo;jl`#e)khJS> )ubI|ė* Cg,Mb(x*A$c e$J2F[A"6Q*ZEi#cEQsCZh(+&{pb`1V?"0J~KV/MwL]F=ju `zK[$t&T+0T#P>y`^đOUH7tg$7q>DBpeaK@8W8Я.w&/4;/Z+ ʄ6;VƚDCM,W:= [9[AIa BH؅h4lI\`8j'j85fCl=w^(awv6$XPOu&)۞kc-Ps*D4E3FilW~:hA?3.+xIe]9KBy9]͸{}a(2ȖfUȀAf1;`w.?uvFldX$j$v2nf }w'WakP>Nx7nm.|o/=mmJh+ҦmJڶLnk T[oAokK4>__ cme1gَ EpGP` 99h;mOأ?ޞ{[ox@H,*3@Bm'I^]#0RS^ '@|vd&A6RSa{8%2}d "u&ik`ef|#iƴ5 |+ێ:Kә #(#oTJmc"@UʋhGEDP6ecD8akHOg3t=ų7лt2_0 aq ׭0A6f'zݛhx}AG)d2M* w+}ӃЅ7:nRXQ8jptSҰ& _܏np$GhaPEif=uX2/ȅ}L~iXĄF*3eҷdؔ*ۙߪCmWnno^uj{1u;Pl;;;Ս<1gǧSq~Aa잰uÛSt_%Y[=#ݰ3HJIN a1K>'xl,|қ?Yd- RO枰=9Olir2pظMQUz(ZpF̍VߘO1yRpEpLEpL0d=1ijC& j !i"JL*b΄Li8NH)(M!$ .ձYjX gMwOLJh%6#FK%QJ=ؿ6րN0of*"El"ڄF҅"!ylehl$B&ֱX*Qr0!It=踞JY (k\ uod%ML֬RK;UZ:iw0[T3ZW}+O&zQRdZIb_Mm5-qXU]c켙G6]AG%FAvE3>j'~Do~?(|*_T@oN889=;oGNNiKG( [Z5IH'MzVTMV >B6ƥAǟZbÐi'xs[y7f98G\=Q"QyQyLD1;F &8 eltxCJG΁m+mDHQZIL`O7rr4m\q궑1˗9XRvvsjiPTWLn$)$sxʒ.P}Y- ʒK`Z6Zڤ{~ VV1dpF3.V?c-g; =6f(4OXU +B*%3lVa*Dx;Qy'gr2ꊂQM|뇋=}/bX*pW{ԓXU>7/8dJ$S^O")W5q"C 2ʀc2)Ҕ8XR]LD%/lpMmj|'w!kJƁ(JD,#wH'Mjʨ T=ċ)pPjasJW: L5n-e\C8 =.ng1p\)0/ %Qt-hK _Y|&@.hPZQ|h@cRW?>r5ICaQa.+uaj~Z̑A t-.|\LD6|U`3ssDl:''G'1[Rw~kʂ!NK~nHdzPrިtp_#I!xDDhXNf֬zЂeXn\=X TVŗWEI|t]Xpg5}CB[)lwsi !jݩ<:}Zm௸/D/v$ZX+ɕg"t\m<s4d"O7enp)pc _R)$ >UNrj=ԇG57ܖ.(m +<g3p,5mON+>SvPU-~LbAO1;S;{xIwqF_%fр?vrAǗ 6ɶˏd&WԚ֐y/i5_=X*V jzl͕I#ġ6BMKX\7+*hT76Ew޽)-YZ*T 2j F7&S d46[,RqHzs"Xjn>I7a]=jHmk$)Kg(i5PaՂyt@R( 0\탷S[J! SU.T//7>͓V8ORqi}x. 
>s]ɟZ.^B-0b_>_ҭU{+#RY܋+ q1cc}‖2^cDy0!NZh]ɤr3MzD3Tز@ }p^?9KH!U 5B,y8}q fKrẆ4ro8^Ս#ApC_X0ۭ< ՇnsMыSԫũӶuq꤇SS1Pq!t3䠓'z܁jgIH[@'6ON0VR^Y,p$anό UEG<2/// =) k<* 2OQjLwYHؕ<˂@K6;8oSxtadFs% ! ,!P?(נ ]#s&A)B׊u,xl {K.mxP^ hciBpF T1!ҡQK8<1MձcOk|eqZ*^R49,sNJ.к&$x "op@J45oRpQK8*,KdB+ksNyrY`^;gA*C=D.v$pSogI+sr`oW%}yź}ewUka/b b׭_c8pAB^ŗ$ŏpEbJ9eWe.z,Et|o2pwˍ*]pw΢E<պ~:׭cGoڭ)#G@B@ڭxmY7OIN4,e*3Z~ iWlo jn>#`=j DQUDf NUh2'KG[;SO 晣fcmeK J0z{+.9}4?)Ѥ+\HuH*rKOΘRS+ UBGm΀tcj]'^ɢkZ,2Rh1ƉHT%D*jw7g"Aɘ*wF#VYvg`-"iUK+I :!!lQ,eܧ)6u"!T¾DzFS| g35ff6߿]Hp P 'wr|j2$k5 FwAh0z)^(9!jsqJMe!lvGgfJL s{mi;3vӪ3vݙPy${u-Hj&L''ǯ\OctjD+kfgT4̷BLi9嗂<kYtB%"0Mif ڌ&c] h('-_pD9JL_\O ~ H zܥyAZs=#׍v!2|IOzԶO< ̳9rN?D/_Ļ7)E$4uBV)Z' kBBBʬظ2 0^e3&vA}s1Ov5>d4 7vk_QXaBT1.a- |fYOA"]υtij ݍY -|+ :fs ~g;RT67@ܜ]:Q&@éMgi]<6߉҉ї"J߭ʠ Fl !1Bύ0ɺ}ZхcfM̙P:Q,t(?Vӏ.K  ~8Θ#PŎXrAjbUۺ U ayR;,a?$-8듸śBnHkAmk3s~ 6Z^R_/˛zQLy%עfi嵞Z|O55X|9=N8FՕ, 3+M(Dkm*T E7,\;`€Om?8/&DggciJp"XK/G+)213$n~/5jZ;б}ScT͘":R#(dS ;jo>Kц2 e5SD#$ȉM`%%"?1n9y@uRev;^(;?;~}x7x\,%uyWry{sVu R S*V4 bEvEP~<Ѣ68m# UkRDcd $'a_z. bl\._E,E<ѐs}h k_G\9qHFU4Үv6M9R2EЊ HGO8Bă@Bpd.KHh$_|D R->gpW *] ,,ljX(ސs#SiuA7% &K*b{30E{d~ؐr'!܆8j82X-`21rDZYNiqipa*u#1KVÃ_b]s=-sP~+G\̣55_[fj<~h'kwjgV;d j1UӦa -J^h׮.iO`y=x2e%kM3DEϚ`$55:lh nPMN@Hk)\4 T!<#( OVdG<#-C.!dB T1yz *^yu@C5~xFnM?|A V?Zi΍ژMk'|>Np"Y!\S_2-.VTC_qK@"Y ]uX)CupRQ6<N|cHxI8{)6 8I4Jiǀis6<ΑLF2!2pOds={^D1yI2ȹD``iࢪFRY@XXJnKx6 Cc9]^eL|&= =d4Syj;Sa_o=vBQI)K>AMҭ 0ogh;A `Tbe @?w/8gSJo9in,\i_ ݖnKd 2T>y s$2!'o?y\e!`l4MWAiV֘J~ 9Xja=F Z=^=ɡheT=>2E A:B7Q^5hx=>usCdoaSĽXOlh3TSQr($FdEAFRƅ+.@j]QH'x>~-,ob5c%Bs֑4V;ih!n. #Ƚ=5wj.w_qG!Ӿ,LjqRT.tÆTeB7zHå W -!ٮop+zX^ Z_%* {je/Wjer{OV4=6ZƿcDƏ'"0kDoy ϣ1_Vb^,JgħƗiuWfE6{Nid Zˀ=؇n5jhz2mx%\5k1  Y' fWWm%*pN\]6\rmU;cN"V i*JItQii*$D5<a/S˩$&_(q;PZTNkTkyEϪ_a(zn}1L7 84( TjaoWdl&y4 Wĺ{朹G zsu-| a=dE񛷫_# sOA-7ɒߣ:Ηlj@kA@Ҽco֊DEr*Z@9J9W|_7HZR\ 1@D6`~=WO? J.kG .(q:M/qqqqT7&zJry2Ud*L e I1 B3 e̦  ~y_uzȍ7V ޗ o| orV}<OeU5X(o M!nbuFrPa 'Xӧ_6{y4GmJ^\B6!!a-<$h(Aհū4uFY'HNVEIiPAՐru)Y.:E锊475(VCH@v<z5<%z8>+{u wTß̐5 .Pv,gƭ+M#Ei.M2ʈ"˘0D$;_viJ2ΈYSZw5CV$yd&KCAģq+B2_>ָ<Ǟ1. 6 (GAJ W{~ڋ̗5_-Vjf_m6;-o_]>kߵa_QR5[%%UIC`U.ϔ1KOosex^%kPxtw`g?<a7!OO(*Bсzh5yy]5juըBdTgdb,p.rPFqfLPe|U( ޼(j>v 0k5VcPz *w2°,J'LVBb2ȅU.Td*g,)|P.dE+Y02:p0Tü;F_ ɴS֑ ?B]1*H{+c>1)9t>'0O'5ihxPj! Z($ Z"-@öh.UG( \me \H 5+|ሠM# PIGy`nN*NLgp26kӱ1,ǎA{h]-zWLXhZ9ԁK!iw5OW85Own2q]Qƈ׸԰+\93t9eg>^} 2T?ʥ6FC16AuHyG(ݛuAe^Ww5^y`_ΆVLRMm~CGNg>?lzhW*O zLܾ-y}繼6 g9蒞E7<ϳϳw(34w۷!?F+~fP,?Q.G/|u~U +yrIT|}dU9db4捇^3^^޽Y0*XvUT\"G56 3ÿ(òͯbWdtXhLN%r2п$'E~q4a??]_`u}:9UZ3-0FWW/5_u{IO1i>rZY GDkky5)Ɠ06hGjOcSTJy=2"M?o?H=% 琇Ihj#k6*mЇZθY谵V7xgt(&mrbHO7@{qxs7?kqFam*LaonwU5p3ѧkǣC z!y+mDkm9t#88|HT*C[8Y<Αj'?iUc-yZe_eFM^Zβ*S7,mL0q@h kY,IB8I@i4 j4ψHAL鬲[@1jub&l?|Zi!Z2@)iO_6-n_]s4$ĶR=<1~5/8U%Sfy"J_6JAKpH8(Ӊ披L 9(G>e3ԪD*JL EΔپ:b}7+;!?Ӵ 8K p#l=:e{|%XSٍ<:Y2t@XξhԒю^a$tk*8!7=hl:i RT˵<I!w&K'] yF|EH`JtN6:5 ~/—2)p\,@SF&{XZezh A" ;j $Å$゛LģBzLPfE(6@CyDLе:żK^/\!8j%bjѪҗP%a.M8&.\B'I+3h< J!#DaϹ/n>p͞1. -MVh"_vI](Bt>$d5C;#VX~8SYSBL4IIT8`,a^w~qF:(q4@lPJ3ƷtD\RQ &Qo|<"M/%hL0¨K?#$5٪WHFЩ;y/sL8(0-`.Ɂ%D3*4m#rm'Үy]/,Z=_@xy+haTh *$f3C'n[^XXAvJUi+6.2C5r:H& _h^ٜ8ǬnuM yXS;!pvxڬ~' %);C$dEaBIM uQEdu<۳goGg{. 6Hrt}Q\6}{dO_fKD=S O*n?:A";P\=hOHN, n2p,(ppZX qBuM[,.Z,|]!g 󮲰|$4.qMn$i)NwgI | B#|O 5Ӎ{ k0u $m#p 㭁?zZ4ZA-&w#N{OHY)ߎٻJA (:;zq' Bُp9gdLW P*ohjK x{:^[7!R]zqn&άCxf#_Usj|UQP?%q9XM,EaŠB)M 6W9-'ƘL$-/Yе fd2k< ޗo@gTmV9ʠgwq=,1fV{xUHe=U/Xm7Xk]pJWVJb{UFWW*juHCDdT9Jt t YW@ƙY(JY~Dkx}r%p U` % 2PO!-+uuϯ'10#Ì| 30U=E&M2YX TF(ɝ&e7d9gɬv.#vV`*XO'bޔ%t/BI9kx\4=ALT K FDLi!LE;R2ׂH3pʲt6EWq^ƉܷJKdtٴ@[Is;qhE<+D!,E@$syjrrAСak6c'(YLfFi^?0aT*J.-UZT*Kь ho]D/BnGE0hŐB! 
C,RT<ώcwm!e"RVrRq"Ib E"i6iG\dYn4880a oX1X0LX0\UR1Y $g=, ȠXk TݬWs{Q$k'K7QTѤRO UvIBS[Xgy$\DqJ+Ny](@@$:yUL%0R82$F 9^⿯ײuAUR`iӔ_zX;@ }Q,TƝmﱈ|on}~ZA;Yg¥w}};[6|tmΧ^aڷysVgQ82kf@ԺaA=z>"иħ㮞 p1y 0~}St#2ow&!wgYT3d|H2t[;=е8O]hyg۴GGӳtLn?7¹lZgנQi/~Jz?޼~ٿMg[iu۝,Ǟ_?'߼ϯyg^㯳W>K[OQ{}{8$`.]1X—|Z-&(|Rf2]4yȱ_]y.f|^,87M}|ş}+O۟-9>Qgx2"wt8y(~7zlI} 7o~!&~]ơ^!\_>:%<,V$W 8NUde3g=v7oc"B}p7d@8  )rIB g>S-mvA@\INNDO(ş2AQ!0 X8HsVK?}6Ea]=ẁxWG.g>|W_ӫ%z<LML囊3J6S )y@&mnVߤ63}ڬqPtIЁWwoPv. ^*mhO(~UZ #U.摸*@9~4B}moA`vLկZj:cK(K}sX%h֮;v7Y-觶fDjJ"BJ18@N 9>x4 :N׭uu34q.2a̤>٪ܘ=f#3MPno,rcJ|"G9ouakouHV'W>_)*N'SY];nohḃ:3Gt)rTNRr ";/]oR G^rT"QRxziBV'nǑ6-Fk\ Stql7=wkWrP*F6(8f() ʵT=8>EF]홬Tr\?=aW!U $7{nWbVGɫ1%Af2%@ӂھHrm~>YcxCA~pC^J׼㘵i Ϗ!/k1L%9'{Rhǎ}+yvlݻVTccrgIK%>b¬ߓnh؛`㝃sP#:nmx) zJHv+$ZW9#>RlE6墽)EGM)8%p"|XbEX*1RٺKi?<)bR38D,-<{x %\i.Ǝ$Q9r2%91^ zJH'$9 r6 㨰hpRj'Gd=v38W#W_V;}s^KnbT[=RmxL?dQjA?~k޺OG=d__W B 2V>] MGEun4뺳H'.!x$yR荻Э݇(XIS5J=1 .F<`C~#F%\Li(&dGP,ke_d@29fCYȲIP.`|o'7v-K˛j)fJbMU^_UE(jV⏯%L3wfd;eMSy'XXhS(c|+ðf,"DhL+ǘxt|/y>(MyeB$F'W` &<  S x%xa1-D S`:J~j,}DLp (4든02W0+K61T $hb8Xqm0h Ńc㕿}Doع-oϨRǺ,e Ƴa=+I@ər(hϯ߾ÚIm7IVK]ܙ-kjq p`vz,HVv r "艪BDDRDvm;A=iV]ar:F=iVK]\Ct?ԱTEA%z(g+4RVc|8^n#\A޵5m$KNN-_TS7:zOrRMGdw)"1& Lttp6vŤe@]uf1j+.fNj&5汄30 ae/w:dO_+&)!t"b CNnQ;?%^ϳ|f.h@3 w #*\s&r헯JaW'cg=QCs]X_{1gW8UPIKeޫBUMހng!4S2F%':sfM93TSHhl8g[ &;HGwlu^CDC&p=%X[]#) >١nG>X5Rh91@ ‰9n i?ܔ2DA@Ď3X5QdH-śC&z6B':y瞞tn)|:i~%60' "W` flu^S: 3:03 f:4˜rb&7ZPbsΨh\Y8c3vk%UjN?rI6;giO_}6Lo1G0Pu|k~i /bfqHauIO\!1{K!=$ Bôryε8 b,5Pf Y# ׺}Ͽw󵋳J(ȞWj˴jWtjgEAt I3phDjD H"eHqzsނlGҵ.&+jZFC IyenF U%O7.C0[=jwwv0BŠHpG᣸~&2YPQKwT}g8GN I(ͣ6HQ]!lwX:Ya7*x%@!r F}7vCiͫmvfQ{;u-R`vgsH%ctrkp:owJ;Hy^_kt_ӏ}qǾQ]/n5"Z$mHD.w<ں"wȰOK$Ҫ!qT>(i%Y\8UV*b+nDL0+I %޾LPkD{U u}`SSlWҔ ~ lh46 爴XKynr,B29[4r[4I^ ypeQxu?_5;t~3{ZCT|+$>j/:%[Ebz2~ӛWn0ݭ*cG~q+Ua}śea}[LիW/gӾa&bo=[|6gLx-ƅRzKn+Kso(@g9\8Iu*]Nj /myLSLDJ6H Y5.rJ}YבNl%n oZJ\&e![oTi߅w㼛%*:ZN$.tlO}Z8!u@w˛A\>Emc|^ae#U#@tکhc:AMlNgIHJe V[ʶ'eP@Nsz̚<=f/-. yh{hj HcSo H-E吃f=W;r1lX4^ʻRKq^x''8Z"e~n&%L!/?L6\4(u(msVу7[M]ޭ‚S'F癄f\*V)޻)lļn7mN5b=¼"aW:=SL##iWϣ΀u]=8E=H }/lg5`*58ڠL[2;LhzlH=N#lMvd_MR#;8HNJwwb2ӤOݟ蜟.yO(K"kyg8+1ϡHHNԗ֕NM׊~έ4yBsҝ}!43ܭjDx߈9$!POL08<$..sO^M[t]s&o*N:\mA aSYpgХe:;K=mq,Iްy؆fE"o06!F@pXHFl86ahK@ `sP@Nzo/)tAG /9=KV_@Y3?@prg۪z+uGtJX)1(VxG=:SI$(˒,2GʹCQ>$L4k(TS8+ц0irCt|C Ԫ͡,{pjdҀQ'ov/{oR]mG#[.{яɶeb\~nD.4vSgӜ@[B`11]n_JոXC VI~l{]w^*]Od5sub.sʉ h-QZ'Pi q(5[g_u+̌Gap/{{ٮտ @FNM%$?HCEE"on&q$ a@tNws/\،;gD  rW6Cw])15il(Dc$S@ny9Vt=:fo-:O/}3{ځf[Qq0TB8O0+vZqK1( 35MFh/5 B|t]؎\|X$+!pR8Zzn({K态R25{O5G'6Z9Q^.h"8oZ5-g[W.!)r FZ-BHM)mwDEԖwox黻l>Yui!<K?{eCW[*oğe@yʟ0Kî;ct7!vF \_AܻsL:yO2gBTkg&k)VFH&}|24Ta+,W 84)Lr"$^rA@˓*<5|ҹ]L~*$O=oFu(Tx5~zGv1Qo1G0藱?y_!\ -jgֻ7?9 mFH*Lɻyhk3wi9Bf\'@]@Әx2o'7[E9LX(Wє~ Xa49Ә! $ؘkP)rQ5 ɳZ 5ejՐPEG'L>I}` dd#ͭ*=QR bxomoq*žס:b_KS(X` iUq%A(_)G3X .6T;XI _jO'[xy9;u`.sn.B.LC9w~uZmaH#D"+WHpG]jTw9X \줐wq!%d=E1%"RLqKd:4b1J9rY,72ʤ2A#4z#Lr`>-r:J (g]SO<&-q98rRFn7spXYV,~")E!PUq+j8b((ј]4P[-p9NsCVJ듂V]Za`e%y~ȿ(_<?̣\ //_a…[%B2;ⓟF`&!ADdgӛ!XlA!|kMUglFO<.Pn :0&woÔ+V,CRg)0k<͍fhKK2ڊuFFXE(G_xA W@ÕSjM}py5b`T4VP "Q)q տ 0(ı\8 &Als8/i]|qC"DrhhC:QL}3^|zZth5Û7qSsۋ n4)9βi'ğ\Gz3׋wk4^ n,mŝ+)ϫnCFY:?=ߕF/waĎ33ĿlCrcܝGBQܻ{?Wq\@z~60C\=U 6 Q 6\֚Sn(9BEioV)8)7&`/e݂+EK{1 n%URz luNj}(NVyMLc?Ʒ—Lpyd`k%ep2*!lf$v"W3: /:XZ$)#i#y,{P88{ti1u=ЮKM+*Qӕ[¹!ݑQ5UkΗ Om٫,ʐRt[7@i۪~hDy@jq &4e ;MsYKemY! 
7]3b!UƋl_=x3Q2/_S A!D &*gl,IeCSI?+D!)O(qvfJp^?awry\'(.+ xN]; uSTkWӞQV&*FyW6{+Zs?u%|_HNv^l.Ԝs1^QXy[-6VR#ez9kyL{SjݱpJ$)ZE#"S;\r5;'+%vW1p%8``4S;MoM;q:뭭#H;]m)NV567L֚/dLu*sKe!(Kss5Z,!z Ec ; kL i*'j L{Zt|nP謿Y~UFtybֹӫy({mbTig_jAvcYShc};2c0PYjfXR`}ƴ`Ҷ~n|сˁ 8DS1+uf\BjF_${y2e/V蕔o mI{4=lp 4$sJ`뎒cB/VT%g~ [H4#GVM۫=Ce"̥$_UܭRQ6qT={N4]2 G=e{ &EZq:!ςbKkjkאCK~/h[ij.~n X/>ĉn-#N*۪ -g>o#qŵ:DjofQP%{w-C\5Q1?#͛RQ)*9Ra2I.QE(C\܊ {<;V^s$m:3֤q;^4#|DZk~8oL0 GybNc:5kX$,$="'k~Uh[T4w9W6 tz1mCХ MI p9,ec$q*`sdI$!V=VFayccJvו[,i[KpZi ձU9 uZiZ54ЬBHWiț<}{Kw<6WO~Tyw,5[ I |nUqvrGO GՄ[^Ij,k>!Xx5mq|O{纜U҅пbDGf^o=w[ܺA+KQ5~:l'qw1.]ܕ.O'njy`Ƅic^1Oȃ6B88\jOSf؞7n&"h­ĭ!x(X$ΑI쥶ZaS3oS^Q ⃓ƻ(y-$*]TT[;Ҧek-bEę%E \P +\qJ ș! `řC RSZY]+!qkŚy]J@3B9 @b)<ÝpMw>VX$*U0`% $JM:T;D@`PirM^{26`p%J{e.,.384y8( NE,5켕&0hjerӨ6!SGXN`GsƙE`ա\K^&qWNOw.Cn61rvb kq=ϗ+!X>Rųyvi7'?6OCʵCoOo`mM cpVo)ΊB7WRy\ \"[A)wRu%yb(K|CC(n{RB)IM9 )4k8`$m~_Hv7^ٙk,"&bh6I%I Ӵk=r.$ 7uǩ0 BDWJ.=9(qbZ8eu0w\r<|h~)^T6 @p]m.J603Ԇ`AS4縚ަ%<,Hu{@AF)TdPP8Lk/3ΌDX]   ۜ9 P,X4 ýq޳HQ ٤|fgS{Ok' C \;"6q3 9Q9cbŞ!;d-8O.$4kԔm^$i i L༱d" [LJ|%K&#HɍUiFொCGVLD -L< ƨ(Um3.eJc$~~,eI$gNG]y: +#s<&/c;FlGcEuk]yl%Z$\7ҐS0\[ΪR%bݒo2>tۇkG׮[ LZ Lx5p2ps֘\{ 7h8^lz瑛;? 냬`F\xxdOcVo aZ:WE/Û>hcI?N2'kaOckw睏C 1"ztsh~unGU}s%u'k:ǟ1*Oǫv ;f [K{H۸m5DP]?D7.~cP`F]g76M8'OtUpht*& !v544BB չPh@980D:x40oǁ"(b#n='XJR6k~"K#*y xxtmv5N鮪V6X18`[;egc%0!~dɞ j rVfڭHJؑ8??SWDdE\P\DRCD"o!mci|pm(SqҢFv.IIԝ/OD-əgfggvgwO.pj]]1(?v GL` si~-GUn֝]_|;|=tpE;N_" Z(JnKyYTjDb`rwGKWIJ7іrzbS) {'ڥ Lݽc^K{\tlxnfI"Mdn%?ppKWNjZ^ 7%ùvD9,Y|rO7g#X3^7͕HnJ0^|dκ?8a0wD=ƒRPAd^$[.YsbJ?yªř3j*$WSpF XL N#b: KeȤ'ג> \[X@6Y|x_LD"GФ?tO["%BM2* HL)s*FEҙ=%%0XX+9?x*esgagEq|ޓ zLHe+=̥MMB'Il1h @^AAN4Sg\rVY_*K8ZHRl_A\*ziId9jC4˂~o"8HxE@X3( 4Rb{ C?5ŤN@hz[dxԖ-1 :TDF ڎs0 $Ċلٳ{ş6rJb7cSX*26헕X@UV,$Ӄ|=uJlݭJ zjoz3U!$7nl*!Jkwt \61{rDžu8rrCYŶ`|B^xt3.#'CId:/遣k>5휛-Ƣz*rw sfϴ=JQG_dz)_ֲuÞ2NvXks`ҘSq42wVD@qxE؊OLzK@b%H9V;qi}}Mwg ]ǎ>?-}F^ڽ/agg>=}RFQG5MY逸n5Ԥ(遆 NjN]x98M˙M gQ^y)W롡'J{E4?hI¯CƧ?$d.d:s$z[.#m㦐'y>?7M`ߢ<-]Ct٨׾Z"I>_k"'=$CG2pO7}Rq2MSyyR\& Q 3=7k0-eqhzOgW"*MK}ClXTXk]&u;i ~۩z?|.(92F(]됼cȦ&;ї`82+/;Gt&"=JbۣP/Ip= m ^L N"f9ed0O0Qqj }3LemyZ[)XQ~RU Nͣ煐RLqy^RP S:#Y9~B@$\ DZk^NnMxNqx?=HçQ;"$^1pbn=bFot}%'{!)̏0#~.#E> ax>\G< l@-m,'-"Aܙ5;g+"i@`CL>A7[@@HMK;I~:eI.C UŘsGzJOq!ptm9$] I+>("i;\JhۣKs갰sdP ]]riBv%ŐF2w3ϥ4ybZ[RP;+ zpXqS»iÎg F"J9ޟS^b*BZl!PKloO>|+ݬ+ޱϐsx-LJkɏYmbu ju]5 IB[8`U!D[2y/ŭLHZ1EXK࠹cܘ6:zr7}{XL'Og&9;:PM7K_`+>%Š1§srYf>+i>n:=jrge BWI\7[9CTLܳoWFyzW"ÄT\eXqٴnV*dbauzg^Մ Im+eu"]N:NPYQڞD(,Ѵu&b D 5.*єHCF@b:pǿ /^'XUbRƭVNJe?ޫЄz!cbτ4ƭ\օ6O"k8-4m2.m#))U޿7(;?N8ٯ7-~KM_rWÀ!I*O8<$p4<(~IL@tFa28f4|-TF~6GMFHukZ)-odT"_ߨ:0 }^rV[/n?Èr19ST=3* WpEE̩&<=祁)Y"09('9F* R6;*L|EaMO/vz *ȯ=f/iBM(i={Khs^#+W_ElBwrMju=KHhva(J]gb:M~~evҡN\B-&--dcc]l {8`HYFş˳HnZƍ49I>Mx)+IBrgfv8Wy$ F4eKH!،;f4Kb/tX,j6f%ǰf˃I Haa!jOSY ^xC3տ(/agE*K%u޼Yn +صo}\&ʁgjfacO*޽wjtє5RR|j)-_BnMɤwgrGٗp0oZIc9tuSSw5mŢcyoWQ,s۟Q:hJKgU_nV}IexO=uofxR:+kIU2 8I:}3wN @KǼU,tqr_e>qj `ۈ ]Xsi j?uC(a2( wrgm2aNwa&P,)*6'$ Zp IHI|,$1|lA̸$ؑ6EŚ/))$_B[G)"[XO%~Q9*d{eV(}v!Wc!o\ܡB#¥[$CBuI2BkWYp+y(B۫WFZg*\=Z辌x$GeړPJHL3/4X|0j~[r2WBB&Rɪ=A6Uɪ%&d-!)!]_>(jPl-x);o(ﮆI0R X1aR vPkF_YF O6Ǻ@HʆJUB`*!ee wlBgM@YNQmygP^D~(\ǣ1A `h]JM =ea8섣CxYÎ*y_h$!GJ.uwWaY7Ny^|'<(wAqOwQOj39Jh7}9Xs|ytIЅɫ?>0@ڄz@u gJaǦwM%RX`6"~-a!pvOw5 S>}>=}lã]Ц3 .ң x6EeZƥS*Og]¾6ģe`cua큧{B>Rzt\sM"z@eUp+3dٶG,_[=&paJ8Rs"ImhlpqH"$qlσ\_@zex 2]#'.I'?jܯ}xb~g7{noofJi J^> 9iekz==D\ ͏][o#7+_9,ib$$'/gat2֮-;<ɜ %Ynbu4d`.~U*UGˎ[X6iٚ:|#AA]6vgI ]aHZpҢ%:~APA*i-(923) ȊԋBg]p:ّT] w ZQ׶⾐Zά~L)Ívy|Fv[o ! AuM7`(ط_RMO6,5 Yiq[jap6`;gu]dfXk p +FJ)QЭO2ee+%g ifę({C.o&#J|MQ_O<2JVvB9vDH0D[6c?>Qǝ_8te.w {7M ȃ璸yf tBr|ǥ΍|2qbx, \?a4vn/t(hO1_n}󣗑 kU;ֵܫ8N.c(VOsd_X!eך l"EIVPwjшНyBDڙg sz+}Ԍ Ҵ}n~Z5}6(Ҍt Gtb! 
yU荨NIM7:GA UhDu~.F2٥ ^HD7MJS@r/Q2B^I/Nkp!"kq+rꂭ]fMVwB?ng ?|s";*HLG9q!)xP1򈢵\dhn x*팤A ,>В7v|\urnB4S:M<3,c1=< Vehg)zDUKWI_NJU!?^z.+ c&6s2c.Ҳr=-~ U,߾$$3@W>\GLy7~8qC4R=?d:[pw@o;wth<ަaԏO1n-6-5˷_{pT0}]Ho*^䤄o_KNHm-N`k;]5I(ֺ#nH`0XKHcTl Ť"@,ZqU-^+@bO5ǴtY2!ÍJn6ڐLZg#IyafNq7\ʠxVEĀJSfe/Zx+iB %{e. Z6V^MD<9NPU+řwiiRv K*l]Wq;2x}4*tO닧f_JnLθ+Ӎ-ߚb.7[SUHxJW\[byLNy@\nNOL{;: ]Q)o*aT)M=%!=="ժeiB4(e$]_eHkױ Kz; 47n8wPUa^c;nnRufx}@jl=p:l:iz5K0V2yCIJC/;yi44͡:ܑ,>%عBnJ y'cZ_{ŗ .qkmScZ:sBCe\n8Ҷbsdo5cw$ofu:߶: .mBvTz햺1ۖ3ʑ^Y̭bRrYjxr&(c[ZԢ8IFb ۋ} :hLr`ǜPѢ [|o߶5[Ipi6Kit(&LZUh ^U( {ۓW7 >pR( TF3FQQhɩKB$Jt}b]I.5ɳI>xu{KvGE n=CΥ.PXj !H䠼cIF]2G@\"yn\842#Hb`O5^UeM8^{M ˔LeYp^ b5>:#BV_ktOehjGM=!K1 \[9ќR"Kn.Aq! "9Ŕ|={:LWa_ (i*. E7\~GU?>#ǜ&c$2\ɑŔ6]50Z/ǟ.lDɗ"g[8˜55_5S U|8ႰWpXIc Iy|bkpq]=zMA\Qi^ j#2vPHD]ϩJ,0ҬjlЂ{ øe[%ioD}!׈ <= oҖk>O}hجpPϴVM5af=LrL6 /*.o?W"ZS(ѕ]ռ"H6g-TX½\]1 1=,([—xp2yïm8{ 3&w5taN:xƹPfO W~Sޱu][O䋀R6ri/id}l{zy# m7uF?Q(_87e$qu~@ 3憐ݗē5-hSbmao L2fbA&TH2+~768 ʙ#V$2 Ao7V$$J5Ĉ0 @xf_J$譺ޢ9 >'[{ojuhɷl{ JI)I y+?4I\mٔu}(n9FPV}zYǁƓ]{g -s}+49Z~G8' g:{kDBG?q{}ي3µzp>Ƕ0+jf/F ] M͐# ۡf\խ^Gf yvD*-DPܛ&zljD&hR?9ÞOgFt=S Q'Y`~p>wJNt#ٳJ;a'VjMn/Ay•b AD.KB5+DeYML |c\K7\n7Zu޽TAPRo߂- xNJ$xGdC;$78*Iؕ#ݽč U$Τ$W/LTMI%)I} gw䠠hg28NɥKى'"ތe͜e2gN3E&p\I=%P<ƁʣfmΊ+ia>4|{ҦqBsэ$s+aLv%7WJF:x-lJIv{%hٙ tBc䂽w<Ϥ%7uӗG=]OǾ^W,HqAlW 3kpX `q  31V~Pof:}UV)%О-QJ#$F|NHnN ~fv~EsM|i$r._8tOsML p71|nrpE ȕq<'+M %,w\u0%+dJ"UA] >o -B~6T_n}᥿fEUѴkIkB %!~vnu@[8( )7DLE @?0`:;.~/(rH`X5?zw1tT&?<ݍƟ/S-~.ξG<,qi< LZ {k`޵q$BeOv3L߻ˀlo6%B_m&"}I]FԐp(RdbX8ꪚjmh am,w8V Bq*BqDwKY,QI*e/GuPwfJV,@95UǪA5#\6R؍<͵-QIq2չ娎_0SE (X1O:QSs ɒ?DYl̳|?ŽO(U0-r_ -gppI,ΛWvTi$8 L ʝ>Jٍs2dLD"@ : 5i;;vCoGEpZ_H {-:os-scДfdwdM 3-ԞjQUpW\#̼f&ȖJlhKMٚEHJ^F3WTWoz^{c ka3qzZx/>CO;)j]9bbk UWvi*9WSz30-ʱLA T}? xe FYGKɬvTV 9exLd骽ĊKwa ;P M!F@jSyr4jѤQRLo P/D,d&Ni<>V*9,d47]pCHwV5nӐČ Z ~u4iЏMKrS|oۘqh|v|8qSAj'E~5 ]5e'BwN>+&(C::T8Am.CZX4^ʧԸޥ!{iy|KaG4[ k͜%K˚+ ptuF~l?b?_1ahFS,:9W {NH[!|$' p%+*pg6W,%3Y>f5lwM-_SΫyݠN6;,2 VbLN喑2v.<12|7rdB H-~iBm$p՛Гw8Z Bf A4)\*g^z誐Ozl7 GDg6O. ԁFn]a>y XPz,,(V]e-Y)ژgvߑ2] 0K&qtE҄SdJ2I})D{DO~֫N\mA! 91k|OY=. 
aF.Rp)7{/Dږ`vDIF]}uoIɾmb=ڗV-w=e™RϺMؗ}G6s8e ڀ~  tC(OLM̀~j 36/*g<H <>rDNYJN9Tn1('4Mn=B|+қYM.&ﻬY:Y:#[#x$ɒ4QQOB ҃ =sxd<g"Ѯ(<:aFzIOR5ZЋ!ٛ~ {Լ|ǒ>߼Ռ[8$I[T^.yNgzѰ@#%SaXkz8XO>a`:KP&-bD4 ZrjfGM!EZ(1x@&"hU.5;:kl[k+o P1uz@A Dp ĉϿ@#e.>HE% 9IIZ!8U X{o>37;z@ Q@" Idh8|Y.R* _d-2~߻_ףjX.`'^h)g̲,z oՠ!j1dZ~hYJg~ [go寴eILŽ!hCA` v}Άv k&U)*S TDB6!;{'֨ <\黎Gn rͤEk(ހw8K2R;"d']"i"D.ɉHh5"d$+v%״NhH*o듊&B,.z"HxZK}u·>on#+j3^ݑ=o)-cLG:^d JSUC7$L^-0h%G*Wn$ ZğиCH2QI|2Dq0FM<[5(OiN?hg:ߜO1TunVNSZy5SNfNmy9MPA`T{pP%1bj=Kæη|R%bR]zS$B1> _ĥ"fשx:ve/Y׹"ȎZ-Y{]"fFDrlG&55Chtv` f͝hYvG} ؉P5K`h,nWK7Z<-ǡSek.n/87p7|K[+u|ꈜ]#]5y;yСɎ+*gmtLFI ?$F[&l@mW:(Єgpu!}iwIǍNhm>plj)';%%0 g)F :*ehL OD{bj0%FrG g8pGe JC`,`xh9J}G#ha\/ہzT'DZFbp28M` l\z&f7(ې WbEvZ=rWfpY]XA5jp<)Pq |m9{Э:zfC`*Zf)DnJA*Z}'j"E69#SO m"V @na%hUc')ΫZ} 80I&!>rH9>h4DK5N\M)V8i=lb]Yo#G+^R}4Ony`̶k^<9+ZgC%YEn/"##B,FhU(%QZReFpX#9A73Mpˆ޾CTZ3$3Gq!0"8ˆBtC2gsg9O0Y):(:7=[[~K!6-Qҟ&/[1UvϦk9Z&%/0N&ûCYd+ 鷪V(g򢔲ӧq>sp,:ZM&CxEL$%7;ian"+XVNSd YKH3ޅϠI#1;,5[9Nu[ w/FIZnCu19ϲ"aUr/8-& 2 orf8(L3"FF_"bkUő]>r*@I牟OĶ՗j#|#~cY, mi%P4- F]Q&{>.BVDҮ(.F^k5̦vL C&rLv1ؖRTtFݴ^4]KjR_#BvKRC1ڟǪ^\EߎMWF:~#yeIV}WT8*E<͸3{j/Q{#mrn5X| oq&$Ҽ~˜ot粀EN!ҽf` Js: dMM0F KO/7}aJgetR9֣NꋾA*x||ӲQaoPix:CWF--R9tqrXy:G7r8=NKR}k/ jaɎ&#wSg)@)cKY]X8\RTDT2R9"8# & *4(-灣#UVwv<ϱz;Y]8Y}Y[4 Oqg݌]bG_ĵKc8`7 NAD' 8=Z:&hO}{.wv?h`Wѷ JuJcAWQ-%.*<@W<{ q<]WPsۦ9ŵh(4cR 5Щr^'Do9"6o7e8h_XT>uúxIh3y`PbMZe^eJ;[oׂo"4brb.&7pb2D]kŪWj]LD=ABRs)6RE@W N)cG#ʗ+R4>_NIJG(9@vZAU eܤJޣAޥFЅ叅 i`&.#a$cQ9r` 2DFְxBm?ߗGTޣn`7m /{ 0>4l ޟe/a7n}*c|xoom Ws8WP`r*X, {kCTqʜ˶ 5z)k[ngaѤIy~)DU壙g~h͛~l&F׻yWwO,5ŢkQK9!{p"XK:l`1ZV2$I U3/,$U5ߕ$I#b!|ؗJ Tl\ *WOnRӯS7/.>YY˳K]8}"-YU -}f=q9S߅iEc!oDf4 Ҫ[S rLiu[:f]?ָޭ y&ĦB''y7InM11mnG},7֜R2ӻa!oDٔ>|zݻGnM11mnG\#IdoޭqGs[MpҒ`-|`oՌPiq9ǥҞGFI*)*RKLnӴXl|vDu.p:2̈́XPڿ2ߞB24rN )MGUy ^329/Y1(g]Ew{b?M-mz(ja17-E묧z7>:>Shcx [;ɦVZ^**:{L3 e@ޑ .I ss(]d% 銪Qf+f 5]\ Dh۷/ebOm8Rݶ_p+ĄjWs\:PBNI.*Y;_ɲ Ku $Jso;Ʀ݆Bl ;iRi\dHH+^5|4"\Z_5r{EW]{EW]TU gtԑH NC scv3,%By/2Jxllrg$RAc (Y\ԼO+Ԍ2Jun5 9M7⌼B|5(p7뇫|ȫVgQz<$ BXJ>уQO#|PDy#MkZa>u̿1-MYdqjQ %ӂYH2Eb)LQeY(*z07LbA{0 #4mH)2Mxp+JZ`k_,s25FUtks[S&Q+D:IoӑvPl 4'w?=ϳ\FP|OoDCNߐϯ*LW|-}!@0!).?|u^i2+|wd~g<On z!ƐRjF)K7TkRMNس5޸M衝=S]N౶J8$M@vb>R>. -m2 Okِo=eX]Ma2b0waN2ī-2Bt杧bd[Fz $']<9i'[WW^e&5@4<16Os39gx=7Ѫĸ|{|7<ס?Ó 4 &&xQXDZ8!*46:$!(aL8 ցjDj):˽"SjqBw=zfJRSy䄕7SEy\NLy|GQ z| 6[3~s30j@y[p`*=dX±o*X;>ՍaRSC^/IV=N|"D \h4xΌbprns-:åDn;C_ї}wO?IJƪު| 5]O5I0Ka1mrw,ǁ, \)l܇`6렧]^ۉ2sPhKGN;N:pRX޹0nǵr]>;)N-9fgӶ.=h蘅nV-ދGs "l袢rD>ޭ7wgt,ZU&EOd4Z1PL~ݤxRN\snsdD縨6@2 TԤPJX91*4y,%[ &hiiR5쎧cea lDN8f#\b"1JaF$!˄P^+xo (m!l:+ŝuUëTx(Ӳ4|sE=нI:7 \rJHueސH fhvDlR0FǐTv BQE5d["H)5IM$-4+EMZO]#0RFf \Tpxp,1L$gi fZIB'ibKFVʥ~sfEhOu#f&CL`HhEk;Wjm pr{s#if`<#Q%ҕߧL߃صmM.&&9AԨ ?xGQzl&`:ԂcaML8JH๐a(ǤrzsLϏEj}ؠ RiOOfl>g&tytm}&M۟6Y)>^JӠӶ>Fߩv>ɂ2>?= ۬ ,x)xАW-߰Z|u8 !DZO+oUPv%3a*g?)uDEt7"\JT28x9RZW?t^σw.h:n[O Tl2xu"/=q4<rA,#R@!TT __M+1E*5]G o7sCtY#o.7N42Dq{I_m%e 5Ȑ >d-(gPV;_οG{Þ@Yimf~\(tM˺,y[~%Ӝ#D-$wjpJK&Ǜ+RȶKb.~-#~64_J"qIb[7Xy6XZ"H^<OnhDe95h}}y9[PԛsF~k0%S,_i0UXZ [ZMK$i{*`!}}WJY_zVߕHN;9u[TVsP#@=aH_uB(-"لCfaG/Tw =:Zd3¨C7ȴvVjj ÈW >Oy@ %jO|F]/IO0K{ Ie Fɹ{ sOMQsH/&gLoZ` 4{[!U4rBʥw s©:8]L!0|k^ong)D[jT{w!PD֚]vnw,h, ./Z6n\97I+ɁnG6 SStÚQ4p dhH,$}+_?wv$\񩼼M˯;'`'aذOaإB"EnB _ ުA=8RW]d c|c"{Cb2_.;yţ߸:z&v9`fSܻz{WOq)h2TV2[\X!3zJaR%$aK̈?|*[l6)7|Z|[fYG˧ PX-HiZ$9,1s r2I2cȒA K$P08}u)Pz3;4bWT]B5Oj<6`7Y,Rxe4r$8 Ab6W ʅ IMX.scu8w2aBi:kSolLI&F`4V}e zhs}lP元l4kzbf7fy^;彯~x,T"Bd}m(✼uպC:&Y;ifql 3pYV"I3 aLhbxߜgY܀0ڍgd l?^ ]9 O3_9 󻫠( hVW 98 5 4SiB@`%V*A4PEQVDE)HCX&_TA~ޝC2N_e 9mA+mA\1pqǧe;ZP%0BdW @Wߏ, "U*߿0/8?ny47Kc6`4O! 
!'؝QzU^tEȕ3Ԣ7Հ,Z5Zh:>y~]+ށ3Yږ^\}^.{wqqw|nc>OW#౼Fט[M1yq>_?]"VW[*FmyuAm'`K~L-.8"Α#pC!@ܣspt4ț Bx}LGG*TJ(vmY[U|sĖuP^AwWzD&)}_"j.M6%3:Pcvi"}\"Ѯ ux»us^&U0ޯry).=KEC/%.-SPOI;On-63pV;Jot?܆tQ !BuV)LBMJ17il0r%E g4b%M$o0%*_JۨHmVIn3:Kv Be"۳VnFMpF|Q.U"VchGj.| cI R0bjsDDcCH3,!4υSzJyf0aαCDsY.eD8I 9e9cP/h@{  ?oHS!ztz ^wڄ=}WAzQuβc vZ|9^nJU#=68@5,k1n-\vV{mRD4E+=12V{+rVreWbZPkX5nƝYNEf5/bXnǮ9vEnf=)fr^Ғ5DQ!5iV*_ݵĝL DMbtE ]j8 XoTMOC.wU7sѰKFN7 N0l@Td:GGO8ʉL4k ZDa,_ajI{QOp=\xꬨ=Hͳ E4H8}\yK}nN=h 2:v +nmHW.A2ulv*/vٻ6r${irdfM:FdIdzzV; $]UX*V:WC+A0='ԷM[jU,Ѯ~T0,׎:u]S4/ӄuQ <`M Xi4w͌*>Y+X'l!F ⼠=w;\e7nQ gҽ ]1"#JغEhMF T뚈 $bY&WE^8p'$+ 6Bf '_!s嚦~4V d _^ؒϺKӑTvh:1S!KS9yE zcg&3fq =TփqP9&^|>䃩{ɧ. /n|yg䅺e_,Y0P 鳎 /X3 c(6)4p*h]őVӃ_)sYV3>2JPuI+ mN nt ˸âg\]Է9q81dr@}C R/ńsqQ3 wD&SQū:ag@ T9{Dq%,kR4{Ol?"ӈ*b 4ӄjbBK)2TnifZ?A4gwE-Ra$$M 6ҌL6 XP!f1H%&=;(Q'vuC1b_̴\N3-:Yb*Y-WV\MD"mc͟sZpO v%F'֝6'Na56#gMR]V|#N)#?S1w̩Ռ)DF;ʷBԙ @.8T9ȍhx٩9GG޶8V74HS-uB@ &*ˠ0eJel V RA2D0yFR&5XhqBad֦$+Dsw ,42i'phZk0r*R$K FJi:uvG}oxMC/ZTP~Kq }?> oF9ԕ?iXpp-n&78*w%Kw!;הr0$L0,le1^h5I{2B˺Bx3䗭'y~BoIi0{&z,>_ƃaoDr˃~%`uT8>fי{ĸtEGU/o_VU?Qgˡ%rXIWp&Qb2!5>\7/N1]O݃>G7w77o߿Yaݛ V_~??Sp^=۫nU{o߿[ }4YJﴝ?/ ՞AGzTͧ͟P4ݏf@LN~d)ZR 8Vg+z7+\ngny}z~0_,t}tMyГWh4) gp6{賙ϑ Uj6y2? qcw9i_}7Ow_BhEAMA< IOMCnxˊQu_|ݺ_}/ (i2od=14= %"N CUjN~3| x&lxA9n j͠^oE@ ztw lu"цcUZAne9l5uy17(Gf ."-@s1#DἻKtQ7;?:ij_.lM :h.:>cz5Śi ppc^  F4h6:u/ k56} -F?1G[ih?, q;t'߷β1iрVIXem{C[Zg!bBvCG| ZEE.B=#z-%5X8ńڃL⦽͡8ĬK*٩77(h[ 6;ٷeh BC, i":ԘNTWR?2"1e(^۬hdRJ9@& Șf̠X8\2'̽SQ! `aZ c 4PSA mF,ZH@V"H!$R*bccJ3k`?ѳY^w爣] Q^EtQX32a(4pk hL8VQiϲD? ) pQV\X F"j8G/ :۝~am;wbF"htEp'15[5A峣} uLs,k|Xپɇ+|>oRDZz;^PjBިIL1^@ߝ+E%ڲYRѣUi`Xa*5E{{]-*qvIHY/Ǜy8a[hqhq8 T_Bcfar|tQLIk0a@QӬJѪcm+| !P;db$& XSRĉ*`3,0aBAd Zb@ R(l0Xr>WN. ^K_Sj[h|  ?*d<XRy^b@P,Z&;(b`tŎb:IQm`rݧRNeN)T6" &0|E1BzLj6.OdnGD)xP>|+rS/'ܑV8wFjjdILYuhvm؃EDk=S.֪[ہ 'fsM 㧁Z^{GDuͽy>fkírz)B<-c;e l5BIle{ 1A1|kA2)o K/l/1u У2@hVX[FRYwH/sAcЖtYݨm1/K/h\P?:'S ^%ۋ7 o5LЦ-ӯ p5+) 2^6(F@D"Q^9]+|}RƩe:ZV]u6)}SKp-6iGwoSI, >YZ$UOGs٧m 63ܖ(;F=[3*0Ӽ0 ~3oUV֏g8)ybxԐJX9pp~<ǀ>IcZN>ŷUTm: `Πqۄs o]}c f6X5zDSwdž)ij X}6^k"t?E6jTK;V!J՟]lmDŽyYK<}ˇ"r0aDN*8>8Xh9!6ל?7޵WⱶS|p RĝKpaSNK{R!?gr?oҏC| I(vәv׏7nST6߸-X!NV0:e2@n.!vsD 5 0~O>Gu3x3JZ}NPD'[zY`ɯ3vGw^BM.6Z3)EijTSPNx4h[H̐DdYŦMVzlЈlᏼ_l={vWlʇI"07]»Q~ 7BDPIMjup)Vfp֝ u q>DBwR9kFG{&# 5-c{E\5DA%;f]WeQka̵߯/Ūf MVjoٙX]!*wLoeZ9g淅07O/QoÒIzg%lZ |%-`=1~{㠞NwSoQEa0JQy 4@*>RLxߡbEiĬfQ4g%ŧ|-)[{kbt}gkPOqlBJ׍1 r^ڹO=2}xRE\ER1>~xԄʝEk&SBaUci3f ydFҞf(n0r}pӤ4 !.X-a 8v5UN~Hs,ϡvF-~,տC/R!ywE(뒽LΖ]".K".B)hϣXHLtHu蕗ͺ) K?Ei5E?U∽/.:jDxs:``<7X X/s |t0/Lѫ:#ºg0]VC`T>7V@]"$QjuPnoaLTC5;E*J(^9T#܃b/o ރ*/rS8Q̲ lb טH*&a0Z zYsX-1f4:qV 3烞f(7WV)/¦%FCRdhȄJl"@Ni*NYi fրZKI+zfL=jBMG)5>1íQ@o-Ͻ*Dd&X rGV{JeQY,X/ր%{&\H(n@5yЎ>PƉg4kbXJ \I R.\=X=Z垫dRq]Ix !! *KpowDǩ b 4yjd D1"雔"'$Wc,Z=7,m;Drj!Ыg㧐zNwJO, i8x㶛NBk(+V.A Y}\KZEONGL9F}qO/==ǯ[EzWfOq uWA\UF)cBRwWAi4i4U@)T+fЬtz'@\̧zNWw>T ˮ2qWdNug⮤- 'mUNӝNSACA!q4T:MZk:M%a T:S %!t#x_p0:e? 'X ɍX!X^'FCj*ղ`t\ckizCՒ[I̊U((vEǯ-Gt5xjlص Ƌ6u8 &ϑFcm)9)-L3B8ENA a'+ Z1*]07,U00{$1@~^3}2,9lG DjpCM: Uuѕ>T CikX*(-0Ym'\8RIRg3IRb2O|1T͛*DF4R1dgX ]8LUd"_VITIW\VR&y")nҥV+ݢMoVCZ-T@,;ihr%icoճd𧊨^'L-|'C%L-uCG9maIRTuWt;&<_)6z ׹jK_>g?{9^9IƉ\֒k}KD1Ջe/=I*ћ̑/CgKzqu3XV ñ[M9Y<'܀"ó|?16>܏3][oG+_Lu"NNq2PWD*$ei6%H6Y;bIlv::Uu JD7 Ϧuh_{c86J(E-&yq Rc$M|Z "΋~j)G /%*e+CΙ=KP{<%|\)"OeV%q-6 9|%PS[uH ڝ}-o )0ozKqvGMPXh#\xpFRX8#ԊP-UŨ,HPF',xw^F)C"U"Dp&iPuRQҾ.ɨHR`*v·JOMB|`FI] ZZ<][["a[سCظK_Ѐ pK ?Ih,ҷ &╃q|rKBC!Oiyy*DJ vyV+A^"uQ u ӊ̭gIQC$cu('gjKjԢS&K$΋{'p5!6*|$ߦ=J`e#Ms+)fѧ2cvac>`kJkJjIoKG;X蓻S/yh cw vjoGIz݃Mh)!o?L[ ;dh)Q=_Qwey{ANk)&Z) /Qb)dZرx|gFg-\{L3n~wv'w{]OZ)8 03Z !p? 
p(}PNR z,;IY[(ןeAw7@`%A+3:I۹+͏W_[T ڢC/,&vxgc8;8'y*s +oT_& VyĂ ?s wkCTZ+5MPZ8@({ه$u$i QUD1a`,{!Q3m三JAN#&Ad a`6b9UZ"[B| cA5w LM&z3xD㜛H ZLP.Z1w:",v߁Z`F6(6>XM4@L4("(XL]Xx6Z?\f2̭n-?󿼨W*X:YQ[UE.]+nQۜ_ӻb1A{젣|,6"CSQOm*4j5p jE I81yiFTOEC. llT;oHI唤BJCH*RxtɎw~"0欛{lCu&i5XNnxϵ5Ħ w5(K9,fCOR] Wx/-eW_aHWW+K5>F-unpGٺύ1yfwo [QHwMr?u~,H4VϵPII=qqQ>q=cU:|b) P1z7`l\ïFcd'۴n`ݚb2tQǺ. ZG֭qGS[# G)8;XG6)b-|志Ŋb,ij#˘3>3D[ʵ Mk'4󎯎Z?vϾg̐:xңE:Z?^o33s=k]JQ6TihuOT&7ϗ9ӆ9ߘ˺gL?"Ǝ3LQR&@[ծϕ ^V;|#*.cP]qrF\le~фxg2H!dxSt9T~_ Ik9>5-\J@+ rfq/*2/U._?3pǽ) Fιy?WSrUhAbpgq"Q$oK > yiLk 9k\R>A֪HFҙ:|wQȋ9V6|^W?܏..Z50S>U Ya>ZP.)X^C-zBNň=s Z (JGrݪS -Y S"(2% (ɉz5D˅ T Ou(]N" ]g֎PY(.de]WAX< *D%$.32 +Hk" JS6.buLCm0l~~s5OIfvb_~FRAt^#)H˝O;)ohy+:^J\+kz1b+bk 2NTr(`Ֆg!J.;fV!K4Wa%BGb%|@4$xfZ ccIZ*3}0+yB0ː6WS8n"J[Ij-y)r[w5[Lg)eNzQȶ_8I®4?Mia#2O$qSƙގԚXLnoQI_~Yohfa3wK~)U#Eg}?~ uvA?z"e[Kq)*[vm+`UlleӡQ}sQeIT,Q%gb&h}4Tu6,~Z?⵳Fю6]Tءx[~ /Ung) i+ʨxYX\>cx+E9%RSws! \[Gr|!X.\NteIkmybJ &.TH);,mqIxY), G0# p>FV9_}∹ s対Lvpli<7](C(5ʠuNFv_㌆LQ"@4"Rh4Lnu;!}-#Hh<橗&K s esqY"iaGF|`YE[ʶu+fT,"]I$Laω&YP⧒ "?qMm&9Ýlpr~!Ag0.ƣK4+D9IP$& _\H<`%~%}ٰKZ%z01ĺD+0|8)VCXaŚȎ:LQbTȼ\ݸ] r|O CӥP\ZCNŇ\'c.G`tgQk.,U#~`gGC(fRg#OP2*]BFPiQB#f>~;e #i]$ѝܡ੒"6;ږq3BCg &2߳#mϦ̯6pq~)|=xl nnN&0#! i,1(ݩgeP/wAGaFŐ v&nxwC$D2XcO>ә(BlnkD)B'[UVRJ|4X,Gi<.ve|K4Z:@|5a \VByu 07e>労:aR),rD߂3+9+ϻooFq?ޔ4m0/֜rh>8A8OmM0'C$n'}1]H$n.5\:kzzsX:/cv1˘]˘AjQSH")W!(P"5hP#sѴG5n.*kTy/rU&z|uüz*k"11 GJ+WNsj''%.Y6H6~VZ"K&;[G-56B29J)c֤DIe8" "x42DG8#mn 2oa~ÌC uԲZzFtvr8-ڹA&ft37.AßCI=ո*Dc7jгlkTi ~VqkXo u*L<`H(aկ+wZȋA VˆƣտW$tk+W/F՜P|~T, VXZc_#9*tĉ[j6xO5SɟIyBSInq|\П{5QUn=(eN 46}?xyIkj {I$lqJd,l8%aۆV;΀mkq0a GN`Y`7+噆OO>.@5Q޿, 3]$E sMiCx0g67rqf)Zw^}2e4s0HۣE|Zyxq31נužCAP]w} u5QWI匃-Y){02-aQn2aLyaA_%n(?rl?6K0#-y(FԱ<]^-1*aN;VISjFn-eOCsxGM+_Em58/ભg;-ށ5;/A\+Dk"KPB6cp״-|>sCtNl_5˩, e=.k<~BagQݰ&:2i~Ve6hol8>Tg 5̆. cr[ih4?$D\4^AB5nxvVhާ8HIQ^ ٸ+S_%3*dADi`yF95w gVь9) %`loj]bp`{R `XQh+;+م$'N[jy$0̨͕ͬ(D.J SҢH,F#[&@fBY\C+fmBL0Evى*I,, G\l. ,$ͩviS83$,(+<ڃcӐ0Ds^QMaJ4E!&xSwQ!a\H)1C:Pp3r$]w<+_Ʋ3a,p+.dK+2SV ĝJ(0y`a lx@,jHrjL[T `R!LM{faIؼSeZ+)W0mKTذZmP,;ΩXi*p<)0}j|]rKnuɍ. KZ#+#`0Ω,Z 1Ϝ0(8VwkO!Bdԑ,5C/Aw(S@E.?4"eA?}ǃ Ҭ[z; E]B?>"[ڈsg g g6;քC}8PkMR>93G_PiX@gգ*F+D28Gu[h$֭̓NŃgC tPΥm>W]c;>*Ӡ|=o7*ydz0&9jx!W3Og<1̣wPRDpw~2J;hh.Gwn'8miv$ѝ;8Gj/]ܐO8 '5Zn'ѝE螣 KLi&rRassVqoX 4ΐN0`'#ζWMɍ/,7C|Kd=m΂me&.erD gQEO5\1"Z%MNLc!rxVp$F])O]r}S5g?Nѹ!X YJa 0*'pkK%9YΕ<<06׍i}H|DQB!*$Ԏ#Ga# oÁ+gǧjGgЛş3l6vU53X?==?'BuCS HZ.b~1"v 0nnaLA?]}Ǎ>5,G~ QpϐSZz(v#ְOnGGsY{8<2w8pZٞNZ6xn嘷s̨F9 c۬b}z֊^/MUXX'e W2㈔xnqh`uJ)#:AJ[r[cmX TҷD+q TR*pJuܶ^vFvwHzEJ.J>dmB1]ĕcq+͞ZE5WtqUDzB'G큀 6J>֭>kJdbE8s"|y=JY aN^wNEѡy5>>&o <2b7\GA7M807] \7fbg?fOuL~-[]N7[wF./'RzK[;X9[_3hL1-Ab#:諸ݎw < nnMHW.I2E݄햊A褾v;eδ[Dc[hL1ӏQG{D햊A褾v;Tδ[z|vkBBr/SLYx8%ݎ&{tgLwhT"pzn[( GJ’뒼#*üw]|=&O^ ǻE~OpHOKgd1* y-i",X}.~F/&(sce":?;OK|k&A/Wlz| 7~,_3x`+e$i|5uP@3TH閩 SLg~|Mm#$\ ;u~;L'c7qe΋pn$}lL"!Etb(Se]U\൯`Q_'w(52|S+u3Pteo?\֠A3X?l(&ТK_5`-R)s[)*JOv~"i"}Z4Axա6aehk1IƓg @)nZ+`$߾1옺jtk/B ҿ{1Qngs@@ς~ PWO`2"akm(C!Y(2˴9w:77k_bQkb#y7UVU^o1hkRܙo[z Ѭf1g 9>9f~>O^|˒aNS6嗫W_zڿ\{X{Stig~sqy4UPvZƣřӼB/@ tIn>,^WeOm6|cu횟(S}u8iEzD9]p'$]VPM$6\,+LFP98ܲ\r%eA;H$FdG3T@ ji>f39hNŚy&eϔ3ͱ%iGIeF +!4^G/8%Zż*eƕ1XD#ar.8Qq_O M*rr[D%.(G\5Uga)vbUuZpᅌՅצqҀC dҖ˖7Hm܅XU2a1MP{xm\ %gC;i `BYxV(QI! 
|GH80 Q)XJGu d>BV0VN'%Hл87 $o!_gIbH5gBO' plf@8kBrδ~‰]"sInE2,&P,%P#hg>vڧKRq9^0VA._ /Qc.MǦ^K~-i*/\ĩaf;|5ʏ:=~aoԾM^jo9BUhP-\ێl]`⺑M b͙#O!nvtQs0KjTՆ!6{Hm#,Hp'^-8Ę,ExnEƘȬ I݂A Zپ2 <=nק u5OeSN0!>>L#^#u}r; ru>0F‡kӧwg|vXnpxo~A(~?!q=RM./l)}ħ(FMn v>w>VB>G52&3~=^{\HpSI!6z6ohH#!XaՍQ4á\RB**+,]y3߭~'+J嫲ܵxlkf߲:B`6 : ~j4qRq3LN`7' 3ˆF͈+ ҭK6p 8uF*ԇX?ejbM'>aC6@Hbfq0LDq%5fRRV~9Gǡ BM9gVϖ:3\ ͔ Q82Ԭ'h BisfB}#0'> >n{!)9Z?pǶڇOe}-W*+A 55>T4-x`ek5^?'8~*E#?؅ޞyn~qȆ}K";xqb`!0>ڃ:ceQaF* 5¤bՁl ."?![ôˀ!b[t#4m_CZIvV'N?o-WNڦkVm5aq:|223[P9df?[d*m!Kxy,5&'(x1uz)JYaD9iw-`vv12|2wGHoܤnnڏ;`_Au63@^e컱[Eag[D+[Ɋ꧗ϙ^K~[5a[<֞]ɱBŰ|qD9JSںt/jkF݇)QȕMo򄰥5coxǰvƀ:+YJY5|e=/y[!b~/|t!^O(m#V tj^f 5fDcU*hgpS9 ;~&,c ċ )բXw ֗*E(xz+ p55\WzH@jQuӥx`J_/yAJ] Kݹ߯{X-`*P|`/-!bCi4=tr"WuhHjGW/2Fnn]ګQ6 Cc[E+;`h B W~&> z,qBf}_[ c Rʭ':=pKJ]U9愦uǦFǪpHlolh*eGMeÖOv_T7:(HBgwl+ %(0v5 U bra5-b uh,ܾH'>6A`t&Q)ULڛZjt -- UTkksDdYb5L*e @g F,Kc*&qrÌbN.R7!clqS2+Q]*3M f *aC$OTjO0ߟjrڏZkJP>tY%3s ++[XY,(V٭a?_xjM k;Ks$!Ĥ-]>kqq{T$ 7Y73&4,[g-5=5% ‘TNUH%N"cP*zڠ>>ao*їa%5tA]:Eqw4E6.QUŜiw>)NaE 1u!/ܚBh:D|; ȷčpy ծZ~2\w8i=/|0/=,;W ~Kp7=ք> K<#[y,Ж)O.&LjI?ڴU%CwuɆVpe5j0ut;C]$ۙPjjɘPbA$Ff SPm.M 5{:!6^YNStޠ*+ W6覨`Z Jbdqhpc$}Υ<5fcL!, 0FbТRJ1MX,E sԖ%(˘ey"re>P[ 9~45;Ќ'pC{:f^_Y~}+A)c↑!fqp|jdU@t؛/(Qz|\1NC>ZMLG~z|-߾@8?$2Mz}=FPAF .\kJkJTz_t va\. WS;YLnLasb #Pb+&麌J0hz)2i#ojJ@4{҂0p\zȒƓ]b-7H,g4w'LٰZ{hn[kj;|>Hk'S%>a ZN*C^`l،uzBuU㈟l bRd4.lElҊ[J:ʶᭀlt9qsjm#8Ŕ3 $N(dA%½#;Sj:() ic#՛ށ%I@ OZ&JY*PS؄9c"4bve>6yrI ^O|1w/Zm y1U%cӄ객B!<*;Z%uq@1 1 MSt~Ҭē" bNb;)!xߏYqH8 dڴ?=8#}j58Ǯ &X+Ph 4GӉn%uJH|ۇx>wv_y{?;*vO_[w:JqmO{OI1,l  3< 2(((#g8$D]1*IK.EaYilDq"HSr06זe $SiHM2lб=<Iܚ8c[҆ BSzШ@,\cO`^!tۯv~ RnOJ8yu[Ll>9Jÿz⧋Yjtu[O"}Ngd|9978klm3sgcaX r&XK2Ɠ8ɇQ*(5ł3J;YtԞ0kC<*%e;¥VBla9IQP-Q6| 510T(_&,i(B$ag)jj2 C'b+v%f|}S:~~h><#&3d1nV,'ZA2 ˬgW Q Ǵ'3/1١1 &I;{ր5a!f0vbeG b1 eƈtuZ(Ѵƺ*}VDQHr`?cqgVCt [$y,\%`EiWg)3V] #gFH1#枩򭙕j!l1 h` .rX}.өS?d;5"پ:#z"y@0_<ŋ< M?ihFAfvDE/PFr5 ӊLp*t ]l ,rl̀iNP Ā jqkX׈*5F'#T#x_nCY ;K_'C(˜1+5[<V)B\ |uڎ/D8^n}Jzhߌun4Dű9[y%\ݚDqti W.mCUkřkl T T}윝PzQc# "w r5> _!0/wy}I *ony{oV-⁶ﰋ8Dtl|M *Faf6p>GT"4=֮@{wa;W$dYGs"\<'Ҁp0*Hn7 {ݵz/  :F9"/?/?gjhsnsy#_Υ{$)62kޙs02J̼ygf,cBSlu1vg y|g `v j[`pvM`s$.`EA/RN d$#^1F(8)Xa3Ac`2q= 2 csi.E FT,a"0ųJ@8x5Rc=F0pޯ {XSt:S.w;(GdIKb2`ca kȨiI2(,XgV9g~tw궓R&C٦BZKhFVJ85sXr rL#+APÂd(&q.JQDU.BB,)+M.aע(9 98v*s%aUx9әՆ:WVs34hˇ4#MFPlb!RiZmI]S0VBh^II1CV|J|5)İРuh CSX84;KRL)D- D oCPsNgv/o;sT90,i@ ;4{͹dF0  S %pW FHq0dCD $ђoC9Jr$ Tf切(΂.`/ aa7Of@~fauyF+~2?T w>? ^sIeS@%gp038غ S@jzr̝~|?Kf`/ϗ}2o6U9K7=_:ۥ\,?? pdg2{t=lWj+?Օ~,)Aj: eu ] Gk瓑L[ک(`,YarX±`h ސ\ke#{n7A*J)n!Q3/pNpөҙ+Kw嚺bp䚊sb(U`R<2$H#H`sƚC˔z˔ .#A oCa}J ov+ zRAHQ"gyK|kQb  tf4-(,Z T/;r/'JJSOz pq^<)3U.?%AZb]M\A!4HL0X09kRz+7o.BNE&A7%?@c8ImypAXOF[rR\ey8RwU_j5,n۠/03YH ^ȼpBrݮN1aӭ菵R .ڕ *(U@K%bu27$4$VY.O[FV7쮵Q_&v w!imnxǀ!u5 B9z:A79[_rتG~oCûlWz20+y{#3NbIq>v'BI~/3/їqI)&,LvzCo?ثE8b%/*ѧT"ΘQUe\)TJӋ*".P e2ADŽ?Y n:m ). xA )n*'IG&=%{ G?VIRG*k%h/zINri3f2ͮ3c@f"-8h.D":(5ǝW ́ NbB;.8 v _߮&Z9%_q P(f2*)~niI߷{\W@F-M2{yzTQk5+ͨտ~7z4>>. 
v)VvA`hP"8 BA?R)Ei d)!uX ]ߏa:qL| )Uxޭ?חd_"2_ j]osFvr{a7QgwˆWqu&II%2ۖH| [J}t.4o[/b-иSmWsZ\=B!릜 D+͢#w ~ٱ`JAftSh6 DԒKak%dVhodΰć-~SC0e9T}!b K@:|\"5K- i4꯱Wt5 T}Ltq"٣eD=b5rL8n8n Lŭ;sM&Bz(UuXts?ZOF+8*%zXz}EPr!8CS\S(̞ےE9ؑk]vKp:D[~(yK@t))ll^K_[xl^:=z-ʟ⳽p䝟{z|,ngW>I9ݧ3*EPn1Z}[!;wԩ7\>~Toж=7ʇ_zy";^|3;8܎"b 0HEgWo}OΩB>lF~|qV{_hޡ){+=d+'jGksUmzסG]1wBu&VۦtG^} +#OlYArb'>5*cvӀEpK#By^]xg+޺Q9rc?V^?x.ܖҍfCh{)b3V鯿Ktu yuչLA7ЍtTm/g *0vhZvMeGڷ=MzvvT>ί+lQ6O`($yhA8et3J1\Hg%Nn9)@.T~%Tɉ)T)U M†^QhϛC-Cn;2GO1IeR`cferT3e_dQ&Rc2(S>c22 WNiyz3I 3*=Ό.HΟw^0%247]#n ߧK8H5aQe&*v 4R^f; &,bu6F[ADC!7 AdAj Ci cږF02;;*=y<ю헙`Ҷ,iDލI I@$(YYYU':Q.[Ђֵ]-8 uPסm;<`\d_߇3髥w}Q+ FC6CTIGY؉t Wa'g\'C926 7 劎&m{Q҈Ox;Um)g|yN"=t68}ftȘ^MNԃ+gz9R6JmHf-CPpJtTk앩h1%0D Jnl4VhJz,U Ю_dG9` /炙@H[1b%4-eF4\o_Ln:;*ƛg.pL<~_WM$fdz]..Ӽzv4s.܌cѝcՑcUus?#\(SwZ}v"-\5WEh_g agZVCu*wwY65]~_oQܼlEkOV4d/1a ZE/|œvV`6_F&7y)qvXjdYfáVRtEaDMR'Wdl(o0nBW )#Yp")r75܌-z[N84@Gl; !l߻\ܪVUՔ2Jn0׫jZ! rL̟oƈ-c6s/E@oyx!g=vb2rezťD 1"ZlQ!Až :6 qS] qm9V!0!* $eo &S_T{åDw秘.z JRA7ik>\b߇O>zG&w&3\eMs64gMs6ڴQ+ f;"MR(#2E F;#7+=DɣWN~SnA[s>!t}лOqɊrۓ%|8z \kGaB2mExD}8x҇ ڣy.J.=.Q?DQ6x(ZQufdilJ82can0̟kGb:ئdBl5ݽLo95'ь}&((ׇre3i)@גAF_>t##|7ʸѼ %u\k@=QA/3"(,QQ[H s,؏/L1 =ת dV ^?mا.~ `g|H4 ŒpDl/8%jv⢟ Lc=uU64gCLs64gCLЖ Oچ:Xpɬ@ Bm^MI =]iw6Zͤb|l o|D5=%~!7D&N'Fׁ(ߺGLi8xe!WF5+Dj?iZo5]nJe> x{nۋ,ܭqy啍 |)ӕvW ±:5'[haBs%/c|K+Nh/_F˷hEn+I Y4!e_4;ivTM\*9U*#Vft2%"%M'}ܧ9R KGv^N߮t%aԈeL%]Z;mh ޲h+cc A4MK$%9h2i'5X 4@yD)1P,yGǙJI"h] ѰβݰbDV]7:aeWǖm"#\!>Zi -e  8e"WJKxQ*O}G86N'CT4&\ q8@sMHrhR:(v؅b{E#1NW/)B%BťL' 1b:iQ$;٪Ճ{h!#Ў\4T\4meV\40\4*wG4!lhI4 <Ey.ѽe+^PJ&$1W1(˘ChbN /c\$@O7)ȠAՐOF!:s.|[5q{l7uCi""}rV<t=AT2WdhybҠEآNУ^1Q@Y z5M-V:݅L6sI_`h!30IMZ3o xO;M{eַM)9/~r9P`}a&W7:: *90M27Y@\850y`L1mf1%!Es*$7i )"@d2'x%@ ^\LAɻ>|gVD4s󼾥9ԷtE)%eQryIȑx" ADАL,jX&yЃ( 7Tw C$6k i72eyጘUh&=ޒZPzHfFE^L*iZiM)Frɟ[O!\v*Xͣ__D{5ڲܣ33riPHjKA595Ѵ-3r/ni#..YCIۥyZRƭw<(/QK,C:Yh4v,,5EIEW[x!d$0G[~`ڌVsᨱ4;ݤ|qPL/쫋rĹWtw =n1AZzug~ヌ(,SB1Z  l4sAjy9G#:g}(i1hDCr|WI{PIFpS~+}qf|[{;-HgM-LV|Rh#woܗ"G/RB Mr}^ނ<ܖj*Gz[~3$(QbD!+KfKfKfK6-T (52aFm򖑤t $1Y46+ 6g$:3:qt}櫄'EF_.ê918[$qh"!&{5MRx_]Lcdd&xN;Pi~End-sP+le Qr-`bfGWfv;γIl}x~k'g_ե;[>,OE ggjNQ %&Esg^bL)JQk]lÿAh̬J d45JrI4 B|Tk'K+a?TJ:`s *!Q|[N0S#N1WZ|B6$ӔoIqvdAݍT =BLZS3&/0 hwst=Szkt t;p>Y 8:SB]"DvO>y h'^gTx:֋1ETݥVB6ڭsR*;Iy4.5mﭔK ʷJK_Ғ{4N._/en ov^F1b+j?_gazQOk. nwj2{Kl?EɍI~٘a =I rXJ%=7oGbhѡb!}Hdi)H>cѩlV/*a0;bhEq(ךd:I؞:ImNMJo&mzaǺ'cRs"1Aޕk!Zu4^qyiwVf1Rug cnol}{O?DH2"`}$O;VǴcQHE6IVDF PY⬽(@`>ioukxAg>itNR3ƥx'!4nm<}e l2ԳS㪗Dۋ_uںy _?Uk d)Zc7 j߾_mJU6y/nu:_hz ؋Um&/e˿b.3z -jTiW1:k dxۀ-Չmu}MZ[|@BCr)8BX7([,!F#snQ-|*SIYHzO#NLENALU61KH($?zdDы)0ɮLۻm1 ;QeOa,f{R]}9K΅:_)|5+cL$r{K'(bXXhR`ZMEZaSI9iicr>KHK,Y hR}5^VOal^zD/j1+nN"zASAzs4'P7MbFv_~Ĉ$f~(YCϳz ~ qAIqpY/Qb<dAIi8-ZQU"W)*] [ii|#%kC'%%{b `4a!^ iݨGcd-ۛ£B2 ƪʰd Xp)*Thŕ{@ؕV\u@pB Ɩ|`.VR 0c c(,/ hTFrFAF" $( /Qa8&mU!E4BVZd`B &^Zǜ^Lwo׋qK"O_.??näM%(ϟ o>K11Sϗx?az렕ۇ1aV3g]ûK6gonwo?m~1 V⦘/]lFA[#x]cM°> k}@Օ/RV%dT(r*1ۤst) !ڬr5ׯi|'?}SNVp͡VlΚA`ФxP/~k-nҽ9XLVl>1}ImK$wB{p|+IhK-paiTx] E '>a2X!hX UAf.G3n npd$ Gt1uĴ{H<`Zr5gOiҶmSOvx9+oKzLgX hfz]T~tȄpKELy~M/_7o8Z\I9!xFkS1 JيR.ݲ'o)ZvijAMU%'#+X9&TҚJr\j 8DŽWW%V2e=/0j:Γ|8+d=Qh*u`@QR3pFCV( ~?`eK!4!\dBY@,\@fr. 
n'(ցLSH[8͍k'یk?>I`R* H˒U4rg!ҁRZR'+Q\$u NZPTҺwC)&&Ğ q B3VHpT-FQ ',FrLҰBq=p &r2t4<.h( o}zĎ ȌtV L2X^K4 Z cpBQiJ$q^ӤCLe &XYnΕOǻrUu|7)׆_410Vb,sH!@s) )@ҢbZRDV4Cz׻>h_V;oG|wT.sR4֤E$l  (pq>3xkC!1n|^V>ңJf[j);X4xwV`]l6I–!#3BܞHKݝ[y »B.T@t˶B0Q@Wy5DžaʹVx=ovh⻛ӻ3 \bMߋGx2QI6c=̷Xw"ll;c mۓg )taYtڻ شAT{U NŬ0-D%;GuT3+OcYd L/մ\Z[#Bz]I P%גkv[[OoKGuwOS)fpx<8lMdEA(BRp&AG} %Vm#s}x)ALvʇZ\#{w{>-V34SױO"b{\u<^{B`{ݏLpot"~[7 mi[0q%XK?ABU۱˓.z`^V7ޯ`ypr<<)+ߟէwQo/l|Ӣ 3}RBn,%˼ã>o,8>oŦSk#4yh݄i&@hR"X$!)cZW^M@=]``X  :ƃ;7hD9RdA̗7C(jiDѤGz6SbSdqF9\wpV @g-r^߶1ʥ _9 zFES1p \.,vq'zim/:^9>_dGJb\ѹ^)|y)|hûW\*/˟}7^i܇b囧bfjެ~"߬ێ鼨v\y %yUOڏk?ޮxz ǜժȪ0`QI$Da5XAuJ VL NPTƏ`͵/Ԗ34:./$"ڔ\[%B#̘E3*ARt%g5ϵD b=;AJh"tEN,rBеT'K7`KʐDq$ttE )K̫I 1VJd"T9c9BT,B+8QՉxy|]_+X #[u5+^*Svy0КZ+kʗ}s!d碿_al|c=/3FuG[$Oy wKVHySZݰ;>l1QTN rmp#[z-DzC>zNHA{̙tΙzwH3 0AzPuəus&uaKz~J3OL@@}n uc1\ƇVL(J(uzVp Q1+(@RkڪoCYbNrTB Z -~UBbœ>9news>6Y7qv8/`|mfuړ%z'G'l P $? `8&#Ot:D ,DJO”Xc[0rBe 9@ U;H8 X7b KaM` < *o-߬o= [!Pog3ӯ|[3;C% _9jL5g"Ƶ-0[12*gtͶk3aYwоvWzā!ͱg+oMҹn#Y0{l1[ۦ`Jp: o%jy4mNrہ!_LTs3zwHӅQE8#RS$VkvdcO (M!>Mb}I$q E)*:=l&`߽ /ӿx@^p={6f"q9N""Y%+m'Uj@W#ŘȺ}@WIHZ5đ ԗj]# iSa>]pK(vH- d0>ةW[2R )M(АBt]K]rj\MZNk(xbȵ1C|hB^ϰIFfX")Z^~F^i?{WƑ /3ue:Ǝ'2F]-b"iuoVY>)۲D4ʪ/Ir-O޳  6C[-IZێ@ҽLru4Ja=i!cDɇK)elPǂw$Q*-tQ7vHCƼpl, 6[ Pexk=w&;d7D)y/}|'TQکO3F yƠO :dapsnMs~-Ka/ZH9F}Jg$!%M<(5 ZXc99D]L:#wNjDZREDˍ򵅯50N5\yT,"NQ) T= `ZTE m 5KJݴR Lmy}JāR0O W!{-Qy!= ٨ct<=Pg+%EWL8gN).ZsѲz<#@Eo*I3/0ͤ8>$K4m@⫤:f~['~Kj  %+4UG%FO(OA3妶^~;&%a=wp0]讚1:9dK/X3撧8-LgIQJ:T'ȥd![_/uAy>䯋Mz%8r:7v,䕛h2T dm˻ZYѻ5 t>w;i)1ͻ5hwkB^6)n %[S rL3x#,[!ڻ5hwkB^6 kPs?:czdNgnAB[s\ւr=ئ$ጘ *3wS7LUR HN%(ʫFUuյm- I{?.%jNtqDž B`!-6{n6=NըZlEKФ{e4i`u 0n,w p!䟘fCWZ f'3x .(SZ}v*,h2q|)R4:cw>_4YU3|_1 4%^<PS= c@уıvmqŀ a׳y+j =\e62o_"*aP:} WvTt즅.~_TR3k̘1 蛸[j"eܱh;cr5=m9'_ 06Q8F/]rvO{|kw.Mnxq(<ƃT_)5÷=炝$R-?*Gcяt3Z$x9d5NF~:[w}_0 (PtNjP`$dtmo&7S+m9SE2NnH~h[ @) #'_D]M kyOJ Sjb)0Z|W8(7jt)pxOKn蜌X,=4 {:;ױC#zOǰ2e),Lae5,[On4uU\9y+-jG&S) Mơܪ8)s}*("FMt>Z "6ORtE!*nOLBlh|=w-6W"XF& 5L?/cX7ԇ3F7 a-R N0cyn^ZbE٪iޯ;Nv>NZqGϋܯ;ytŹtP=-sUmQx#>"^0mx-9SDlv^$= /_CE!)l,cG8 c΁qw'-ߚM:z3Q'VGcFV ]akFn?𚥜=TeK hT iIhۛSG/LcD<;Gθ\ޢ'W0qTGo ]iP5^TA3q{H4[$m ' ao7T 9]/^&#LFxY5jwBp#!&U*p b)H S< $"p5.I! r Žvru Q I$ԩB+h͉CyH tz)^6|Mm |P-:oX@-/[dԄ$N:R"5ocRZIB qqSNьGR%O^*/j%vWylN/S&4ڐSFc #\;Kl678Sc>)?=qQ;-_p#$K_5C¡о0p3sƅ<%T0º;@Fj]%u.>ƴR+ ?啤&R~ /*]m}Q2Sx0Qbe'Hcz5ryfЧ1]a4ֆRRv]>EoG6´2tӒ!bktN7`ÃSE f(_u.z]e]ۢt.FK-JkiT(^>~޻mWC <p.t C1Cdh:ί$U K[Jۺےg2; &٥Sx7<r0({0(&aBKu_~Hjof-u!́? 
=0w P㯮X8Ϻx_=)q.^"[u[+eMp8k N^Zw) [@MN1^@vK`!{#k~un1Qυ;D1XXٿ߸w@OpY8x{aٙADto&J-$kiyvZ:Jpϥ4FDM~DAd^c!bpȄbk<ސPP!Y"U(H45M = qXV8B|t&OpȌQ@]|@, yjr%tZ0hRH(amvYܲ3Vaâpj$VJAt,DNЄz*Zu(F-' 5lYDzX * _J 1RY)eR@H LD\X!1䔴,H>:{)5gt]YsG+cPuW]+΋jk,;7q@jT_8c[2+++O<ОXE.WInr6[Ks,Gs4Dr7n/_K}/V1uJﯖ&>޲:7oܼ+8%=O"*mwkm),N?/~{XyjHBrM)%o>n<;R1gn_9.M'; 13X1J,rWhݪS6c܁cyf)bVH ImF.&G-rri KcQ5!䣂xCμ~`|AJ*CWLi Sm0,J .#S0z1"=l=&zڏ1КP+~,P_Lu0͜2Ѐz<1 .TX[ˤFL[P=&G2[hH ;m@[_w3 Ac,qfQW>bshwJJ*pUgjJ.*E*PQ-H!qrQeJ:}`wJ՝n2Q/˴A#9h LQ4dNZjTv- %ӚV[PrWUփav,d>'Ő&}Ehңt \n@3AbJɼNrюb5AXA5(pc2SfS̵[Pv28\&ICtXc2"0SY$2=09EozJڒ(R9 .E .ilVS]+Bs 2rB>NN%Gr#6KSu}*B>0QwUVB"Wv Qm^ݚ\DSd*y^Qjxp9-J-(у7~(FGpu)bh$D߱D :%>V߱8}A%'k@Br=X'Ledq?Ispn)n@ZootՎFރFyA`2PjLAC\' L]w2Қぴwuގ!bpLh\ta"I,FFO[iddqHEE^IGt a92Q6h0St&,+wu{G/-Ff>r׋~ -eF؍/3s'';R+8!<# ٙI| wmg'ɣIsC`+soi%HN>=cEhPp$ ŪW'AOG0By/OhPvZhso3c`κEݴA.,V-LzES$-XL]2mG@Ԭs>}[:D7G>x0pjb x38* L6bJSn4g_|A!4 ]0 ѱ25- f.'JCX1SՀHG<4CΧ}44G- oа;jxWX#}d@3޻v:@KZ=Ъg4墣|g\"b-!|܀L[a ל/G.H_M&dnga `Bֳ{7 %'c;-φGZ+Id=X5i IcaN| a&yG"\8Шm|:6+@V՟uׅY_>"Z!Mnƌi$fcWdEfb*LNgR­_}ޖxg׳t1ηzێRt6 #݋7ʧe?3GKUm_^G#ڈhQD8y, h^V 8altGQ-Gd*dž\fDp̠YhKGXIk 57,g-R֕ A`iJVvQcA5SPܯy㮛*NW R|ƻn-ٍuߞ?8%ρ}!u~RBJ UեuT>g dN50n̲ LoGlH$QvK$T]sXf[^Vŕ.wus=^W ([\Nl .YQnF7wۻɏn1zwrXܞx7"rW~~oTvZMw6\Ii͗33`{GߝXbį|-w6Ȟd/Ob|#߾;A^{ s^6ʄOj?.bsν_udSRz*_I{d+w(p3ùBr_+ 'p|]+yBD\r*rJ6Va -E ä;Kdغ`jX 6RcW~=&~WJ=rPծZR"Z(KچoOV@e^AU'R8.c0(}ޮɌ/GnǤ%o}7go`!H)) ˹v3&3wo^N'`zO<UeK@`d>IQGrɿMÝ}S$8  6} =*wQm6(Y0jh/61=EqZz>y>-i2ͨ@ȕ|=Z A+GH3־5 R'{+Bt2˚EFXyN>)`u djCQ?@G&jXjz&AtlVآj.h fN~'^mc.I\j][oc9+f9UPU@lwa.v8蚤*q\5>o9}c['._("?R,;LrCW_n(,kK}3 Veʵn^6뽵EJ 7,oVB{NvkN>qgnn'3#4ݻ)#XJ.m.K]5ը?g:5U.3)5vYE{^ŤlϪHr>?$wݖ*HrRV@I_ԓ!@,fZ=/Dc=â$z?3?*:[7z$GnT7M5\z֐j s&*E)Kaݗ7 ̫i0)Ƞϕtcي'W[ϰuJ~{ɮ9Y!^hp7dy8gkﱝ$ 2s+ί%ƓVp3Ξ ;=fAa"Q5'3ORJ#nz^q=TY<~VZ|wW.CBEKq/ۡ7-1_S'/Bup1=x nNLHHWcFh<#q[l=AKj}(ᑈ2K:x&v5 n܎'qұkvga#1@2!u# $ 6¸tԖYrCϏjlrLr-6lhWXUoC`)~۰n@Z!A:fh=o̰f }YzY翖}Z6/EڻUmÙ=Tmrii[9}{+50V0 P R8̃B323\VA<هd FC ,ģJR!.ti=BSWDFcmhK$_5cnl)lɭϘ~Wq%1޴ S"%R`,',%#K>tys jHZI px'|WCbtiEG1l78?cJhY bIel6)dՄ簮n5 JkreՍ3Aa7n}tqh#h@.Ngjnu< MPnj+ ȫiɀu=\ٹB4wzo:(1lZEzYAUg-hR!WU Uآ guX5\^Kֺ8r=yE6(Γ%=;*%]'klOt ̰AIB 036$ZR8)SU2(N(E)*^`ж,\K <;M r,ra_1' ́6S.f|um'<<ڄ@6 k6 ȍdtk[>n޷pJc8cKzO\aycBug49WOv|w1}]82YlULrnl&h䝗Crnrw7xu?·Xl>O8{+.U'$ Up!rQhӓBޱ\T/u+K:JWUhPnNOFunn!h+C!cFd(P\Q(Ai( _OI98ole葹l\&kV$ KӓzV HʀL" r Yrt͆'nݐHӝv&'йӅ8x-mcNWuJXh+hMG` 54 }򑥓|~b&<{_ڛڜ&C%H, >2W*2>m!l). tR Dm?8UlN*o9C1V[+3rw1 m/ >)9xŎ ;oVoGՕiic8a16(KaS}&MuMJ< Vxx)2.6O+N^rfLEj<9˘O;aA:me!( DExDp9(<5əލ%-)m&|ѐP[P]/@ɥ*iCa ЈȎ2*K\*4&@j0LdEk_1rH 2^hUAEf1~lQϪ/=no82+$&!%-7p^pӬ?i'&̾=J2XvEoW?6ϫJB9 AD:7_) C,P_x!o6кZgi {xԔFĵe22$M7/#qS`-1W-|b"c u]F )ujՒd՞PsjZqSj3NhY2 =9'c*㑂G= ,ӏDL\|^=GGu06 ǠɎswL0^:AAH(4Ϻc-n+vЌWWmjkkZqȀ)QAq+5wZAQMe >VVJۨ sw"%j:{RWG7Vm l.V@@D PΑh-??9L>8J,VU5(mnңuRC(Cҳ\Hr!gx^rp==w zr>0# U(:-ƊJT~'n~ۋ?M.;}~n܇!6ξgi4:KO=zh,}qM8DvWaeoCN^ z/vw-iVh4l޸ 8b=VUUcm):UrCs(a ҧ%C3c0ҧJA8Vܡ7f!J@Ti1[Dk߯a}ʶN A\QXoτT!LY:H`k#[z^-]8{n9i-'jR菴1d+},Z^SM{*4ʰ\Jߟt3Yk٫\24.qO =B<4K/t}K9iQ}trNAp,Le jHpakwlGC ^IsO6ڝ ¶L̫.1)5l<ոbAra-[(,",cRd;S (u#p ,ЖgE̙(ML9ӦYTZ7eř;L;NJi4C83Jr /krH8&K ݥY)j>BK,2G)5+fK,ICQfrY, I%WXVc9:3cIqRL2 \sBR 7$ I26:kH U--q %e PyA%`a\IRsP.E&'\>Tcb-^v3tY]$99bd5EY2B5gKO5xF)dXаwr9BKj\Nc 5kX1'=m@.-@B9Ov4*>.Q'K&{!{6A2 syV%FPPrւBdn3d`" BU1Zn@h,Ȝtd)1}IB=sNRW^ZvPV'x:^QDӉ@xبū`@#PcR>e#sp(gAQ5#f+ 1ʌ!h޳(7$+awm.w3K,93sꪲdI%pխ%˖-@[̬|T[^9; ̼˄vK㑇gQ$ְbS.nwO3X*iɋ.HX2 y_=:MHښm:4[u3gXe<+_PXݘO&.9a|#R3L`Fk!ܛ oVV"⛕'kbw\9m}rQЌ)h*Pl 4fpAtn, cw11":%(`E?\LF2g̩lvQY)\{y$ƴxSݻ:]o0ݛpfkGf"7?߄EI)عD(Bbm­/A1E o8F8M“g_~|zf:औ.&YTbMS8s C1:gg1Ck|,. |HfBT͆~l8<vh4I*/)NooGx5V~2i<c݊!8-3U$nw in=|ol{=#$ض߹ƯK?/ɗIH$}Pm$$[ձGL=|ýǭ'{{?? 
_ctBTgC~6<9t3|19}x'#pXN7iw; 5nw(,0惤dɰ'E钼:6'?$H'J^>{`"3ݹ~<' GxpMU=??Lu&?ŰWJ .MdF<#|?{88NQڨ- y?̴O?68_濨Vg)ɳ'/=}$waPOFf/-y/z?J^$ߝ_w4'^y5Hg!'oHY0POb䩯XmY Gc=kjm%8}M[4>װfnK RGҏ, tGG:d2ﻹK\^#<t_'x3!}CJ9>6­4d\WхUjɸoe`Ou-! N|?)vkI~ VC92pGŴ~)Gv4mwn'oo{p8^ ytՑZ)?pP'u:߽ffU6A/&ؘv 1ӡ7 y~=ϋH/?n,||qvy3;vK38]&o m~gNjo?I}6{?XyoNz\ [糏fss֏XҸyǬ]] N9O9ο8~'a >gE@D[l={ʁ׾:s.'?׀~>o{P ᇊ ag(9zv o<_El2~[kJrae媔Wfn 4z\|_]Ϻ{]}>>>m L)iJ?dZ݈_V-V]M0F0##g{,3zDdTl}wgcIr$;kMXȰ!e˔R-I49«&A&Uu8Nk%y4=:k 9BԊ}*NEQ<jh3I6jTFjZVj*-W[WS~)_MUz@xoJW[ oV\i*~@p{.0:R#N zj J{15Ѽ2٤u}_7RĘ&@|:ZFc=ﭩT B:@ƨmHD[賘ϞiwX D*,RL2Ee( AYLXo`ɰvp$ iI:VGE)ưVrIRӳQ%7]s|[1pBc7_u]Ȓ- Y\GNqfw)*1ӷtЯ!X%6ذ` \e1*r 1UGSiHT3!erAΌD %ihal!x*,!cX@Qs++ Z#)5ʘuQ#cZ!5CSפoCq?ir޻p@mlu=8紐B\ϔ)J)~},KcyY ! m}k5Ket& b(11S"ig̸y`O V}DX{iܴK>ɋbXlL)uB2:]Yj|R/y' CBw|׊w|ךM[7Sd'_m&>ƬE"s㪒y'9g]RO/p;p݌;0H3 bg,uW9n=/8{[1{[ً]7_wBfw{p'w+[c#<2o,V2o,l% ]7˫QNm+ԶbNm+Զfsj/n ~m5 qV,Պ=[GskG%x+$U)v6˼n3z@;Xda(LH2ƮW_Dlf7pF}>7R0ztw/kQB+\6ާ}D Nn^FXkBw7qcM)kB5ޅ"K٪uK=ڜu7ʕәc;$L*[I5ڏ+ˋ%[VY!cWY2lSfP,GҙM@5ٙNd3%!."Ġ@NORwJLk_a&W F#ÌOdI) ^s O^l$XҖhŀi1KtAC/p4.F[iBRjS/3V-Bp!S̩4Bx\`lh,#nr2@HX]7AM^Rٲ6\+X)o ++z5artx,$.N.E%s\Ar`L:1h*RP@BFt`B04uϝ J(!1\htf46NP;3y&kZ܈ OCYZ4ipC}?3搩[Ւ"SX0b$6&dS"\g+PZ×0|^H\eN7GX؜riT=gZ{㸱_[6߼ $e&$6L؝@RzXVY~%XRV⇺d璗f*Wu8!%+iF7UT> )n)xrڟB 9#ڏֶxS!JdS[۬A bKD.K<~]=UC\,VЃt !HܯC/wGˍ{yk=A1U-G*DitKMzs9 FRLr "_uITKTf9o>퉣VLhVPFy$N \iAZ(Hm$(r݂@k`䙽13yGm\ zLxYmw-zȇOo9ş}^ZԻNvoNAjwZuv\ӾA\FMӯeB*'rB*' }7,CpIJZ@RB&Ƚ&M8&&Z;" 1K"g$KDǢNScQ)C}!J%hK9pI44_zI 5R+ H;y1^'9 (STy`GiDG RH-R@m"p`lRD}:L6Zޖh "K_NLA HN"I"I0AKtD4C?%Jf$9*,0)i1m"ho<%oK!sQ%#qLڜN9LH">K B7^Cbw:Z%F(# y'šc *eA!`ɕ3foqspED!\s&8` g4 r\T }LR&]9hȉK,*hdDY9"|DZC$|5GDZB')iH1.$kA@F' ==;5|)q Rhq0. \!:h-xg{[LtqUF{Oqn7YATAP"ǁW b(S.HpI.onDB鮘%xThD{%o(>hz;ВCKG7{5BK1`C5ovC<{3Z>..{ߠZ챍[ ⏋ލR9Awd{Tz$wO2 WJzdA_K}B%xVxQ\aSe pTBh]hb%Dh&\Q j8Ԁkg`fh X!Ε=ޤ]T_0,7ZhAn$}\L*. Dd2RCŴReTȢ(FCq+&F( h37uH ARּ D}ETdVeU/F/ ZV%/R#~K'u}qYX}D=MʐrY>dِ )%. "(uJ+WTq \ uz3WN%)M\9U "(tv2+,:CEtrX(- 76N>VgKD@o2O/7VݵưtA0rN-]C|k0^$oùѠbx<=i41crfo?ٳ)Ũ&d;~\$-Vl꺸Ac2| &EA_MϺ?MOUjmh8m,J* Rj$*w6_c=E2"Xgpz2ͷxS5f悚sFTRFd?O7&ǷX_$f.%Fonyoi6BlZr!7b=C) vHBiAeG}}C dPLjA]1ipRJP4V10:;APP|"]sm8l*O_p~oێU >RphajxsKɐ̽o5rj+zF47;{A5UJwI}\q;x/9FoZ-`O e`^!I6ھ:ѳhDEi@zKrDA|P}rK21 xRe#4Qh:X+(9R! ԣG8/= 9_HrƩA h`1&$V炃pQS]J^I_.暁.w4X~9d lܯ`vDzXP9hJJ*Ȋdx#) ۳6"gM*ʕ [PՈ?94MFS.) :x]݀6q**Mf9~( b%j+F)/5E_PqN,{Ddö:Q_!ؒyFxZ>pI8:>?0{l{u>J}s=zr/˕]_L/Ŷ,|A-JFϟ RRS }w}ә<o.O]fݵe_MWO_y{c,^Yr%N'/s6Ugz=ɟLf^~ֲ7_&H8?3wl_T9z~F&8lEp <މܩãw߾O[KMS@Uz&*jV[j}O7G~χo~|{'Mhz~ _xUy^lS iu>o\=N+/dqq&[>=uFq2AFօ"]thmy-O剶<і'-gt0K(tv~esuV mĖ8q~"$ 7+cϊD^ZsRue&32 6D)#J:1HqJu"i)"h&/˿4idь8e h0) -(QV `iBl1볃*3v'YaȲfm㓨߄T2BET8mrm~{ .h z;hYPa}H)Xdo+IC-H{WMHT-HLZc%T͖(Ni= AtJ3ދ6>,%FGIDٲg5y.יY[/2m~Z"b9oZ"[,jj½.n%/#71୺aeOradp܍smTB"A! [/2[D!{^n%u]otۼW0wtU92<>>@&tK|~}+%VбWO; ]!jᥖhtQ}~} ` Ca{PRyyk6=s!;W]Ep<8`W^  'h&#^y{eƷ|OHez{06|MYp矗"#8'AH3s+4g7q]]ֆ2UPbHp;cߦ [[tV=|sW͠ǒ̵}Uk֢4ݥ[yE4қ⨗8mnAC`$9m<\̕ZNʠmT}kbX 2xШs1:yOvRzZpPﯓ՚NCpViyp*yWYV[N $Ib4\qKxsKVN<ܜhJt5:ɼ=q;lۺmeMQ=|jE6:kYV ̔kXNX>Ţ1'W׽β{H\yv3sFپ8239x;uwf֩+T(U<012@F^],Jؤ*C$PjBWk6""QZ {HxGN UPhv-[\>F=@%%--aT tGUKq-~M0K&?k[27$oa!&ɣDu4VLH+A3xۙ%'N1[+ 72孹'츌_p_l wo+U#|;>^FG^|.8Ֆ6|.w폸rRo|"3~\BҴߕ/|<8$XF+Ͱ>DPSQqnA9\Hꍎ GK$QjfvXĒ " !FظCoӟ=ÌfDp:/LNCo4뿶 lǙ^) G^Ȼ(A2{ϥ,٧g/6ps#~>N7^R[z:$9gR6Y{ִvo.g:pms_p\W75h"LUE\28PZ! 
.*V{&~V U>!28z=BQI,kDG"\Zga*7鯔k:'B> c/8||"eZL0i沼jM,-ȈRYC\写c4`O'V[r .TX.pv=}9yt:)>-edvt{?Ŭ G0_L7g}N?q }a1/2)bh0yr6$pK0e)҄.pDQr+j􁬰ఁN<|_SG3S@Y$~1hǴ!PijXA܃ V cj_#V"-7Oj,WniBҪq:j1 hʾ NUb+mu1`;j` 2F|: 63'+pyHZ٧83OOrfV3'Px2PS3c-KXf;Z1i//z)"}o3xf4wN O}呠 &P< DԠ Sԡ)Ռ;Qm%pQPVF];_?F1o)X(Bl|\2'tK$^,++E @YB1ܥ$͂5 1#xc4S__l1BN?{׭ O=I$E@<,vw@ImIRc'=:s8v3}H[gDRGI$5!T3K^zOj pv5?QЌ)I}ȟ8_uʇ@~v~>}WVf$RtH:ʲMjk1&1/tߊ;O)V4~ڇ2@*r QjJΙ\\a{NLĐJrA&e/a٢[r;(Ҍ㢊gv5n]zJI-{г!'|%s* |lPI1TRMgU!Խrc[==KXZ"{6##K74j7]ɦ-I+KBf}-lN@l63Z|;WVQ-ZU5l O+6(+exˍ5e,!,`{wY/Kv/heE|gd G5ٗh+SxXۧ,w MbO?|߃TNNF[/`2Z+}J?7΀*LE:?yy1=0ʯjaR4fԶovBɌZ5@PJT3xMDu<|kBH4_Zk%&>p.'TFh/pN^Dqk~ђ([dQ6~} uk{;"H|Jo#z=&y=\}_ڨFyHכCu6IK.MazH|uG)jۮ&l̡{4Z 7 86nꑯ9,,&vS=0:ˑK;޴7|?c6Wʰ=pcEڨ`tH$"it}ZTYWV˦Mu/^U7&JcXP\rN&! B*bA9ߚfVRK%f Y~y}dž}8EF=s|Kzv2A7`mI%=oRpM*u&YY)"F|RҪuEX9|JZAHjj1&U7KZVSeN~H1WR5(fX|YC2n%tJ Vʑ_H+\eɕ&;3SkddZN2 S,I^Sg JYvG5*_׋ٻÐrR{]{\'n g 5of U#ў1gZ+h=bt9΁(UGZ}-,mg k߷*㒫($ 9tf3iw'c'IV,RUj9pØQ'ţ%lA\xvI"+ ]I\L\%S\o(N?P]G+܉p' Ju*%ZVxֻ;i􌰣Ƥ)B5r-~zJ28i((M5=aɷ Oި0lк4ǐ'p{FXbPpKoR!ʂڇTT/|KL5'*{q` MwOg%&o\NzHʊ,z.|O렎n|nRi`NZswEv<`$d]] N{v8)s+Ov$.S,>ZWCtmMAΤ֡KoޅP8xe@A 86qw;tz*# hܩyQ*ʗ˶1zuf㖒&D @~\ ,@k`^{$K℻ d XW}1˘:++tԏvRIvİC$ҥX22hX dVUHgT)Ao5[-e?9%WIUP@6Ylmө(Raj/4rϦ&ы !%"a[EVQgΙ)|HFMdd%AG-sk'K*cgZ'jOv,H UWumͷq@j_Ԍ"=Llm19ecJ%9,HtIjhx-$jEMBMwU-mϐخ:18w$l*hcgv枫*Z/"q0*Y!pje KJ +TjU S֖\Ph=V$QXV5Ju,n&ԭ۳1egQ"u"d$NMW'aiV٢`$gTp)oN,λęLxL|~&"; ^gsKXzRC,: "$),=#JY}NZj:9$eꚆP5sݛ1` j`Zimės|3+u'smc b608n(B(ԷܣviwN޷SU }}vɝO[$C26 V>4v!QXpzt]JԳgRH36dlumP3dߕT)aZ8paj ,x϶fmhUIE9 e۔cD .5$—81xԠD{Z-v%#AmgZBjajkurCȥ[2D\E=H4Bʌ\m%zpAz39jMdG`PolH9:SC.:(ˊ'G']]ȶ11bVyDnNplN89T!:9 KsM4! `r> ~"d4jUh~v┄?>SuƄuٹqWٻu.IΣm9:l΄ }o.ޞ/3W/.Z/#濟[Ho^k?eWu/*o7fw/=v<ƷGnxnx^E;N(O{F4K V:OC=W{;VLa?ig.5"< + M@VP]ws+e=[n/6PBoVTCjhQqoiw~c?Z银R}VڎYG5jM%G+}VJ[(ȏJBb6xOJҚR'/]֤hOJ]g[gҏ"ƿMz|w'ii-e-] _߻~zxBvͽ}':\e 4&́L=//CBv95 9h}8__4#wwNw":yxu2x(~ ]xAZj8t-Sڨc7u o/N.Zm2Y[7t Z N_C]vaQv*NF{Ov휼])p Vvx-}ML]޺&CuQލjVpi)V.x05:eq+ y={F뛼qmҾIj /ѻP@-_1r,%h_Y6jg 17 hJpSPGLU(TqB%/lRVpǔB3P_/S[Y:^ θJNd䂓)Anru$4Uɏ b2fz߻#H=*h Ūbw)sNQ!4 ;!gRa!l=߬賭-o5KZا ' cE'(6\,nzW5]_O[JJkfblnXϣW0砑<-TAdNNV}nmWr~RhFT~KKfË6}o|ZqD!4wqe{6,>y;jwpu5xryR.{9wh4_D-Né%a]]. *ݣ"nUѨmq:8}AhN?S7ҽÏ~W0Jz[o?MՃ?:9Lk_tfTgVhXow^у_\Q9I=fqz初⚹6)B2Xe"rurZ[r.cw%NI=*v%ngK3dhVF((ZϜDx*iWqɿ:jxus_2Wpwf˦\ zD'%%Z@RGSSȢ~ɽ̩$犖{ 2175_nsߐ'#B/zrb>}]|xCA~X}& XWT0roq +K?_[)e 2oC'&fC%TR|= Nr7|.:g ťs^Mi$iZded !V  (goB]X$ Y*ٍ9pEJXz$LPij QΓA#@r#R8fv!Itj 'i ~N 4+ '[ }|#4)"GrUr z~kه_߱$Hy\?~ŒI/dEȓjI׸8||w7˟`|o yfon#rhn!\k}J ass ; ȟ2!n:_vM<ƛrEwH$ R$dq0"0` ~gEE7oࢢ{Gx4; % 4-l}IQvW X;%3PYS[?Lq.^|܀Z@C2Dwc&nOBl͡1dUs' c+~?ܼCjlZT4JI}Gv8k&TѴ[z*ZvCBqM)Mnj n4wtn" >vK/ʼnn}H7. 
lu8z T¹ `HdOWj W0GE$Qsa=qC 57<蔷'(PR 3 G1*|*tFd+Eo3SjU&ʒe\siU،xkNg>fGI~E8 GIFӯN[űH)edjB=ⲔeNsZ2/h ʄd7Jg WQAD5RF-\qrْj YE)HA} Tt%C$ׄ:R|TF`FON[#ĕj.CJi4{>a)eĜsSmV:w~B,)}wEhZ,ϳ5 iɾ|e8ށ&0V/<6🅿D&VB]V wˊwu~GJ9?laքPliU&0AtD&K@Ny*>HbI=>=>Yk+L-\yaJ) 0I@Lf Yި1| ЦdYw/03!e'Y..6qU%O:β`4|A3m>(1`H/\~U |~~W% vg,9sktm}x݈lz3@70:PwyNW#yu,Dqk,35'{d# K8+aa//)!p|}\Þ7n3"|.co0BHf oM$4uE0y A|weUo dR*(6c?ΐo"ÏO}$*HU쫺d(f7YPIFϭΗ)c!Djeh7l~rI~,HC6z]NVF~?2|3%!)Vco4Ѐ1mK/R󶠎~#)&9  SĬ7L[66M6UjSh@6{FBm-ʗ0S@`{-ʗ-d mwND(RHܨ) e" tܲ:LEO#E CӃv@ 5Z9=$Ӄ⥔1}O9J;ܺ"ṙE GRHx}9BZgI>zhB}SA{W㵧zGO=cni:?cqGi4 ͹emOζ׿ƶ} /GBt_ӣq;}j9(1aGH) 91G#z:GǻV/TzgfE*Kgވ%c7r0Uk#bh-2mv@K_HT/tsZU"ԙ- YW*y1zuqȮ'?>}Yܛw*n§jEz-/2^V351ݳ8=8 (f$`%у^m}{gVi1[$ j=ҕzVJMh5ZʽmFs>.juz䲸H^>Ta,#cF,caؕ5WҌd\RYnhƍZI)m/x!ok-` Au#P2zݚY6Gq$t@haBS2RywDq컰+I ^IEFUj9%D˚5`/||[*U˫ Wu;PZYȽ%qqKjb=pt_$NB!N 4/JwH}y||.1})=|JD&^rA} D, bwT0hcwtTFQT)*k)X ` J0id) W- TZR Fh|WbTݶ(@$WaAu+9bsI@rʩ1vW`d6M ah]j"?I1~a3};Mas ]ƃΰL m|\9 {25;fHOġż4Dɠ0n}EФ5M=4nҘS9YA' >saB "Gw( 6A]4*@gȉ&P*J,0s)3vP"sZP_*/e Ͱ A z]qg3]|!D,o{vٵXFCk{ rĴE~e,JVљd\Xei2# w%'*XYQ^ *qp)Nc󤐞S[esѡP)E-xa^.=k}!wn}qVO䈜tx MA< t@)1˽peAB?aBJxJʳT8R[% ؅fm1g\P҃U#HBg잍. UB4SaֱhHv )PS"Y>:6- jzBp)m%y/ʄȑLAWbNE36 VU0s̟6Ӕ[TkUY2{)|h?(Ո!ϝ.T.g _6 {Kd=Kjra6l.ČRWZˀ4cCt)Bk&8 \.\&HjqԆnCݮvH=Ӥf}^ c]3%K p'Sa)thw Azy,k bo%לSTgB OHAMb`5"¸m8# / "`~d>fahoWH PŽQd3Ƨ2쐷5>9KE%YԻYJ$$F{MCЦ"0Gح^&8Q.[L@ˤˬIHR KzHzBQTy3}UOaG.0RpU=-壢5Nf#G<c mQӵM^[x1}:^#?(l7T3/?vk?8D=r:D*f/~WAA_^{o8ff}ڝU\2<8o=6O/'/fɪR"`Rklٔ ]jI+ ԊZs"KdUH"7p}ߍ[SXgl]%rv ĜLOM՟}l ލ3kpjWaU4 sT:@=h:?r5.%@YKzm\Jצ<{:J7Gv"#To@`Մ&OIPuuN&% &.C{ѦV/V-] oU8 ){O4lkm-̒[X}I!x_~d/ v5|Zr(_,}7L ;O̭1Y$p+ry{vlw&-I< i_(E ?B(l::M0դ~sԇ]+9@T|>."k6Vi%Y:p7$ r*JuŽL :9[rsd`SAmIgzmq -;  H[bH W55:'y4UИWAc^5fj;Bw,SArƽ"gOF)i%}!j0EIU/[ E*B!;.~&LxȺ{VR2C3fy,ک\kZyk2u(b΍v/>\_` .I}8!ݿ̀65;KNnQ޾0omYwpAIk'gá'%j )U1fGӅpq" $8y%(e5cX*JG앪.$zw#p;>RI{%?=nTtI48GK anOBH,C3 fn/P2+{ h,_vm DlΥʒDkjGZfm&\|kO>u]Ǝqbg7ŪG]zpGUϟxQ9J1Q=[q' qvjjOLi<]4G?}v*^R83@N R1W_DAv,C`/`|=!HD9{ϨvEFgOߵfo۠]sQkӻ^vzٖ'̱dWp>-7~ICG q/ ȀJw~% ^'J @zܙK F_ |`R7lt0Ue91i5N\v& ^ﲌXå>:eCө / 1gZM1x8ܱS}m20Ér'Nk 2чS4^,жM۽@"J慃djo @G&}|ܕa+KܕOoK Q4R\a,mp66{!hɻ;UuNw8V g&Gf7=$ie7j~Ov27fc7f@7띗9 Ʊf4?_KAH?m 6'$ yKw1|'ﺎ$vFY4?785; k߽_wmH<회!I>9F @ j2uvIc6m2ue YN,Zer(/1PU"_1eb"?OIg]^38-ŦZgv͒g)|qɤ# %CQ%¯U,(X=8@+'uA^.K Y.ܞ q]M~]x@׈]W.&T?_VTbɷ.\mHa4^Ǣ֨3@bXTmզKމ "&@g$f&C& <M 'v~! a@3 ٯ"巻eBգ0ox@wȬnn0`H:GΤtocf)z72 o3?)θl9[ Xf> 1#ްv!"zUAcդn_+ ΉhSb.B1 } ov4^pw,ģ4l(%ȝe[`,e}KGR fd;I#:"&' !-nʧcGb앒&p`_K])@QzH&"/x\<LKMqdD|eF3  nj1hD3&?: p@'HbE:j}[oכO$WrQ *t.RVd9rƞDn젍01ElҢI ٯQR=cWo<\2iW5uބԖܥΆE"ԱпF a F{4 3+I,zk"֋xRQ]Cu%6ն: /jj[grГg$9WŧcG*VK 7)V@ ]s,J sq^P|5ulU &Bl.%YΪad ijUK(DP2x\܁uw>Dl=*izSn?3E&knsF#ђ^<}AR'9} K&>yuK_vUo%Q/I[X@:EED3ZU-^_X&W{ i0d{AEbTYSs>EG/A}[^Er7>m~߄e(ˤfy-}$r*4ۿn bsE/V};?C)zL̃vܶ!IͯWԥ8 5h`HAL~c]~_f^DU] Fr9F@&L Nm8g3fFtDHW_6kq*stR]2y*E2:HiBA |,9\ kG$LMėn_&ǵ#mP8)Y:2}9 A"#S@:X:ơ9.8Wnt/7n# 1E`9s6%:Q7ՎH8?"i靤CSH^M>q HM:'saFl.ĸcjT ^&  dq?=%TIZkXɉTx6LXRp z %f@LŇ~6M%@U.uH_.1c%,|:߯u $ IPL{[HWX c)gU`{̂V.QXdBFd:T}6gۼڍowv<^+eY+a՟MZBqD; MșܱXT /}\BvQI`ǣSFz3Ⱦn~4*08 {>#."Ċ5 #äˢğ#5N@-sK2jHXJMԃ}"?ӊxBvIpj9/>)'`:GA?pEE bfj蜆Ū4q"vbiK (ΐd<$ )']z8ezq @'!s ”yGNؕv]nM'ȟ6N w\d>%= Uj 悃PN *jNJ$T%kӮPнԉN〓5ъ .e .:8yB).]+Ʈz!}-s`6Վ}fy\ښq|8/"y4%ߧ6[׎a^8 ׂ]>jfܽ&'Z9ronh,#ONi4%-b^T))iX4N3#[g9mXYӠf9q6߮6lsrG 66Kֽ6>;͗EPVFuMK k5Fx-$a;Կ,>){nу{wqx``zy) 2%-ZyQCUG!Eoi)@֋XZK׻`2bɏ) ~kOaIgsN]Ғ"Ʀ61ƉRs޵q$Biݦ~1ٓ8O^ *E-9OPFE 5CIN4zKwuՈjq?[W ".HS֒A\&d 5像 ;ht_^9ǜCq1>: ĽNr$>PLn\D&ߟsP € 7X|*U9tx]49tn⏂(x! 
g֮/R&ockG :xEDS08ID*Nќ ?m(R$lxҕ -7M2c]ĘIA$ ANcdM^΢hOJ!64eUZ_uLʇ J~.wAxt&LI ]Gy24\6w &)=;ѻMk0k@xo#lޏ\zIY- HN)!+fX,C$Ȫ(-,L8D|0?.糱sN&&ٗ\.d9"W2FoZ7#¨b}xbw `B_N CB #(w DLjMOЋwY*o !L>4 ɅEr6R {¸S9v\N*I\]{F`hK9sI81UĝKk(sBJIHF ʗ Ym(l_A9€;վer.E)sp#P"7i"AhJkʅ:etvbwRJ.. "zd1yNAI ƐLR}a7\N7ZT(&yf߲ FG"ٽ)n&3g˘_MvK܁ATF,R.J" =0I}Qb%:<%ᯏ˕WAkoP^TZq'0"WLZ eU 4[mKq". s(ke>7ȱQX"EU~n^lMrNywFBp}@*B@-W\1 K5p=qs"W_Zb6 Gj@'R|DpvC'-Qvv|ArJ' O@T'N;ǚ9z/!trQ5FNjo@K^Bj{bW)o@pu=Ԓ!'p|Nkʕp֚r]"q 2 !&ad oGs hE:-&\]d#@co$F:,φPB[ @cD+z:RM_,W:.l8 Z:#n. ӂB6ނ߀K1Zg$ 3sHNwο*:o%';x ց ν;%2Ig 3냣X+(!1@ k"1F?kF0n5kd>EH>nO1F CN#2t4*fkYs2ZT4Dl)Q>2Y!pTONDhOt-dx.l|=gUjE Oa5I'DY@Ֆ ́Cၷ.=HC xpi|}C c .CBRLhApϹ$ 2*(#VAJg=J~ȥQ=y=$mCmYQVbɎ+XDtz>_|t3m\lΕY?_W#ϗ=*;UՠO2BNrxBl|3^jIO07;W AоnxԚgMzwENʑA’j,kͮ=O>5"B Ml$<XBǒ'p%\,0wRqӒ>-ILs[Xf 3x\h&g[8sØ Z'<(=5yNPe>XYpA%"q]hK5T3V`/ !H%=-nߞJAۍ2CIH?IJt>[MK2VWNJ3X?ya82fazurZq1XG 8lBV\;4x)2t;~e>*ֳ$kO,[,ȥ=8m1?zbSW_Qtg?OǠe5q}p#f0Lrzh.8ݾwi4\8K捬+481`r]Cud-~]3 5C Gr~~y;h\fgO2NnMWt_o~ ~{͂w5? _?}=uͻ??9?"TVN "eQ'|.v"<T"ʠ67#ʕ7IZtY i`M ]%`Ԅaq;Ȃ Q(EnqASf1%IC:xH^Ja(&mںqz1@dܔܐ"2]є[^$I#D5 Ji찢&} # G"wD[& 2͟k-v~Z܏Z]hK5߈ZhtVSsTZE`ڂA1@}$ũk*M;[ Z3O#x7U]UݶOr?K5Y둎(nF47럿oN>!HVzexa^!T"LP;?B)?.'tX007<]*)F)E3:a`K&YO.Q^f0VƵmpB!rdXom|rkL2tԜ ]JKPUx i$p8r(4a?g)Ȧ<0OHj#dny~!gEJ(L4E-7Ѷ=޲PrxxTIrq<\΢Tg_@{@U-7 xAv{2z@_O S饚#wZ+*˟1eL X# ?t""憏%“DvguF&Y:Su;M#oIB~a~@[3;Z $ߌH[^29-DU؛Y-MX>hfJT j+~w$ hX/eWk:2 v r^͇40|y ʬAwfʃy\ԏZT/,e*v+d="䥿 k5 ղkFk;R_MPwg%=+Qʾi\$!q$SN/Nn:Ҡt|^|FQkιm[N2%̋8h77֕mD;nE=v~vCB"!SLVPsSvλakSp<[$ؖΦraWӔ|y}<i>V]HH#)EI#p}-ٯ{7K ?w-;4|1,fKVJ"tUz BJrQ( @Z*PBFQƇ K#br!xAsNׂ!p`4#I8Et"hW;T ֍QRH :w]܏əQRc!K*2%(PtXXhc&9S喽ZꋨhQ7']@"#4y#9koz0"6&\OȘ yCjp@ jE%([!323#ὨYġEE:pSB $Jd H4 Vɠ .P;Q'@[~e=@tA %kȆ/*xMoST>*2VEKKu9dJ1 _y)'9]"$'&Hck :rpl[" >:"pLDDtk PIե׈V`0{8p",)+݄)' ʶ;--l%$*S޻IA]Ζ"B&.qEDh(oy B0"J /NVC0m8mXG z͉Bd+(N>T8-MRV{:45\dJQ1pcN<3e#"w:{%Z$՟CSO@,BD"ar+h1NDn E4kS%V#tC"tUG' 㵎DpfR' LaP㬢2;B%5֙rL8\g\hfk">HDGhhd(qNDF'" 8iL%hiE.'7UV!N5\C(b_} =#sKD $5%ZⵚRxVx5t. Gk5卝eݖr.]xT0+ռy)u/y*bS:ɇZ‚4rF)-y>xmے*m|6h5W[{MζE)fFFCS KۥJ^vZ-$ÿ䬐!(\ JY &PSB>_wmїл\_S[qShD3!M$qO 쒍}a3}%oR+հwdĺg՜)PD8c;{7~0SuAytE:"O)0c:6,4lV鷔Ws$X+-هx#te{F,X,W,&~Z,Z> L4\YClk .jC'#W>r:IӪb4D3\U3b8Nu_Ug>鸴`3r +:?aN;xe/J$uFx>86:o.gqiMu'.{e5 f vAgƝ+f?qRdMFQ| ɲVB^FK*x7-e/nXsm-e6/! 66)#lvZ+^zU^r?|0[SGɜ} Ѭʂ5]Wnbi[֍(zQ$%Vz ![쁗W!bj}ZOA`rNB0L0MזO P7#y<)WTxZ-v%VKg[i,5l>\܊ Hbeժl? 
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[554390551]: ---"Objects listed" error: 11118ms (21:40:24.827)
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[554390551]: [11.118129414s] [11.118129414s] END
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.827787 4979 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: E0130 21:40:24.828960 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.829805 4979 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.831362 4979 trace.go:236] Trace[1187413104]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:40:14.597) (total time: 10233ms):
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1187413104]: ---"Objects listed" error: 10233ms (21:40:24.831)
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1187413104]: [10.233688728s] [10.233688728s] END
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.831403 4979 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.832108 4979 trace.go:236] Trace[1433974907]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:40:12.361) (total time: 12470ms):
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1433974907]: ---"Objects listed" error: 12470ms (21:40:24.832)
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1433974907]: [12.470825732s] [12.470825732s] END
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.832311 4979 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.832962 4979 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.834351 4979 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.857075 4979 csr.go:261] certificate signing request csr-kq9nt is approved, waiting to be issued
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.865825 4979 csr.go:257] certificate signing request csr-kq9nt is issued
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.989491 4979 apiserver.go:52] "Watching apiserver"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.001697 4979 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.002020 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.002480 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.002488 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.002557 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004088 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.004211 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004234 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004662 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004781 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.004877 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.008491 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.008510 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.009770 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.010670 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024018 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024041 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:10:42.862776223 +0000 UTC
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024359 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024372 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024606 4979 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024670 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.032822 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034388 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034436 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034454 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034469 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034485 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034547 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034576 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034591 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034608 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034640 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034657 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034674 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034691 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034708 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034741 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034775 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034818 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034837 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034868 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034885 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034909 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034926 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034947 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034966 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034983 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034998 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035022 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035060 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035078 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035093 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035110 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035126 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035141 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035156 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035172 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035189 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035206 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035242 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035273 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035289 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035306 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035321 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035335 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035371 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035385 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035402 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035432 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035449 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035464 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035481 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035497 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035512 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035527 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035542 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035564 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035581 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035598 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035618 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035638 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035654 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035673 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035706 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035722 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035739 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035755 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035770 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035785 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035800 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035814 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035830 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035846 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035910 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035924 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035942 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035958 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035974 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035991 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036044 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036063 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036081 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036099 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036118 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036134 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036151 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036168 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036185 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036201 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036219 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036236 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036252 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036286 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036302 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036319 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036336 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036352 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036388 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036403 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036419 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036451 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036467 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036483 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036501 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036521 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036539 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036555 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036571 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036588 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036605 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036764 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036781 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036797 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036829 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036844 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036882 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036898 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036914 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036963 4979 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036979 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036995 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037010 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037288 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037314 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037659 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037936 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038173 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038206 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038223 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038241 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038264 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038285 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038302 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038323 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038340 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038375 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038391 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038408 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038424 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038440 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038456 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038472 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038487 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038504 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038522 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038540 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038557 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038573 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038589 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038618 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038637 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038639 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038655 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038674 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038691 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038707 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038742 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038775 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038792 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038809 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038827 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038844 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038859 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038893 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038911 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038927 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038945 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038961 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039011 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039070 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039104 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039167 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039708 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039739 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039773 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039804 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039836 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039915 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039976 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040002 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040057 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040165 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040186 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040201 4979 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.040273 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.040342 4979 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.540317706 +0000 UTC m=+21.501564739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040566 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040800 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040936 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041468 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041530 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041473 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041705 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041361 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041757 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041881 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042144 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042778 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.044581 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.044769 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045356 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045614 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045847 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046135 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046329 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046769 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046872 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047175 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047479 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047691 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047966 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048008 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048235 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048337 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048494 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048565 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049160 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049212 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049222 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049827 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049838 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.049831 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.049938 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.549916625 +0000 UTC m=+21.511163658 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050446 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050435 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050860 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050935 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051133 4979 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051203 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051598 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051713 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051946 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051979 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.052584 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.053378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.053491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.053637 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054019 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054140 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054165 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054813 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054909 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054948 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.062229 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.062509 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.062743 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063109 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063120 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063407 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063738 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.064207 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.064471 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.064508 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.066028 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.067264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.067915 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068601 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068790 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068958 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.069494 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072258 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072514 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072728 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072983 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.073618 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.073719 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074174 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074342 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074524 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074782 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.075047 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.075073 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.075216 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.575185276 +0000 UTC m=+21.536432499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.075437 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.077522 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.077786 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.077907 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078386 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078626 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078766 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079246 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079319 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079678 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079471 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079610 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.081357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082544 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082576 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082591 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082657 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.582634197 +0000 UTC m=+21.543881220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.082769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.084958 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.085561 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.085916 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086292 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086221 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086507 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086633 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086639 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086759 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086960 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086968 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087430 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087436 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087660 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087953 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088010 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088130 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088311 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088780 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088802 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.089062 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.089206 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.089423 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.090243 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086089 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.090680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.090766 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.091612 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.092457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.092834 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.092933 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.094860 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095079 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095479 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095564 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095846 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095896 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.096252 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.096798 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.096917 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.099304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087248 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.099821 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100155 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100224 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100356 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100995 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.101613 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.103069 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.103916 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.103941 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.104268 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.104551 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.105402 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.106485 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.107102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.107342 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.107366 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108167 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108219 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108335 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.108355 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.607431576 +0000 UTC m=+21.568678609 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108331 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108565 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108983 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.110852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.111129 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.112729 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.113397 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.114606 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.116258 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.116835 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.117222 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.118118 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.118872 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.118866 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119137 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119219 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119622 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119683 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.120088 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.120490 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.121692 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.121779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.121859 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.122170 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.122241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.122391 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.124790 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.127534 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.127749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.130482 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.133758 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.137405 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138231 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138479 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138901 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.139716 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.141146 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.141717 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.142326 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.142666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143046 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143123 4979 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 
21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143154 4979 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143185 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143201 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143220 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143238 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143253 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143270 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143284 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143298 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143310 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143213 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143361 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143379 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143399 4979 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143412 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143424 4979 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143435 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143454 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143468 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143482 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143506 4979 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143524 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143536 4979 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143548 4979 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143567 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143585 4979 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143598 4979 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143611 4979 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143626 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143638 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143650 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143662 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143676 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143687 4979 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143698 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143712 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143728 4979 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143740 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143752 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143768 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143779 4979 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143791 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143821 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143837 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143854 4979 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143892 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143904 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143918 4979 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143930 4979 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143942 4979 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143953 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143980 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143994 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144008 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144024 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144058 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144074 4979 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144087 4979 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144116 4979 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144152 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144165 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144177 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144194 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144208 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144221 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144237 4979 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144250 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144263 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144275 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144312 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144328 4979 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144342 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144359 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144376 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144390 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144403 4979 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144415 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144431 4979 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144444 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144456 4979 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144476 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144490 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144501 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144513 4979 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144532 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144544 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144555 4979 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144567 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144583 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144597 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144658 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144677 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144690 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144705 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144718 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144734 4979 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144746 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144760 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144774 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144791 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144805 4979 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144817 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144829 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144844 4979 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144856 4979 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144867 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144884 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144896 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144908 4979 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144947 4979 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144965 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144979 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144993 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145007 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145023 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145057 4979 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145072 4979 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145091 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145105 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145119 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145133 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145149 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145162 4979 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145176 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145189 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145208 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145242 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145258 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145270 4979 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145288 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145301 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145314 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145332 4979 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145344 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145358 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145371 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145398 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145412 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145425 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145438 4979 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145455 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145469 4979 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145484 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145499 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145513 4979 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145526 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145538 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145566 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145578 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145589 4979 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145601 4979 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145617 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145632 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145645 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145661 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145673 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145686 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145700 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145736 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145751 4979 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145764 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145779 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145795 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145808 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145822 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145836 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145853 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145876 4979 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145891 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145907 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145921 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145934 4979 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145947 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145964 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145978 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145992 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.146006 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148270 4979 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148312 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148326 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148340 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.150454 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151139 4979 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151176 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151187 4979 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151198 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151213 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151215 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.152057 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.153347 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.153729 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.155095 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.156384 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.158333 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.160841 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.162673 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.163814 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.164737 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.166243 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.166904 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.167187 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.168720 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.169090 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.170241 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.170906 4979 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.171679 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.174736 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.175387 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.175947 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.177899 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.179095 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.179736 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.180898 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.181819 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.182800 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.183007 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.183007 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.183674 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.185190 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.186326 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.186943 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.187642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.188665 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.189581 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.190960 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.191592 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.192878 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.193535 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.194299 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.195262 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.195557 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.196086 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.201076 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.208179 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.220310 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.235104 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.247082 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252223 4979 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252261 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252271 4979 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252281 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.256886 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.269546 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.293399 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.305082 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.322374 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.342860 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.366441 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.388385 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.448944 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.462774 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.475594 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.488358 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.502512 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.515470 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.532841 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.555885 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.555968 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.556008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.556072 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.556049835 +0000 UTC m=+22.517296868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.556151 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.556210 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.556197638 +0000 UTC m=+22.517444671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.578288 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.657442 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.657537 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.657569 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657701 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657728 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657739 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657848 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.657710247 +0000 UTC m=+22.618957280 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657917 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.657906232 +0000 UTC m=+22.619153265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657908 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657971 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657991 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.658103 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.658076116 +0000 UTC m=+22.619323319 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.867195 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 21:35:24 +0000 UTC, rotation deadline is 2026-11-07 08:35:25.253961084 +0000 UTC
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.867288 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6730h54m59.386675896s for next certificate rotation
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.025088 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:18:26.718348035 +0000 UTC
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.219504 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232"}
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.219564 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"33c38ed5a670798ba0108c80b24ba1dbac83bfc637d1afc0476a86ce5f3037e2"}
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.222166 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490"}
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.222227 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220"}
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.222240 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0034876ce8c1f15d39ab53cac1d8ecd7f0ca27691a6438d9e37b78544eafb308"}
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.223789 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"801a5a05057a522df7aae470fe16721dc47b25237887153178ded5b7952d2ec1"}
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.232160 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.232513 4979 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.247051 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.257111 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.265660 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.276846 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.289023 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.299070 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.313793 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.324607 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.333737 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kqsqg"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.334138 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p8nz9"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.334295 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.334413 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: W0130 21:40:26.336062 4979 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.336128 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.337013 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.337259 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338437 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338498 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338656 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338918 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.339190 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.340871 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.354979 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28767351-ec5c-4f9e-8b01-2954eaf4ea30-mcd-auth-proxy-config\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc 
kubenswrapper[4979]: I0130 21:40:26.363849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhmw\" (UniqueName: \"kubernetes.io/projected/01c7f257-42d4-4934-805e-7f5d80988fa3-kube-api-access-lkhmw\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363872 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28767351-ec5c-4f9e-8b01-2954eaf4ea30-rootfs\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363888 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdwj\" (UniqueName: \"kubernetes.io/projected/28767351-ec5c-4f9e-8b01-2954eaf4ea30-kube-api-access-8zdwj\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363912 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28767351-ec5c-4f9e-8b01-2954eaf4ea30-proxy-tls\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363928 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01c7f257-42d4-4934-805e-7f5d80988fa3-hosts-file\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.372830 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.388499 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.405734 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.423903 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.437703 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.451349 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464076 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464552 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28767351-ec5c-4f9e-8b01-2954eaf4ea30-proxy-tls\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01c7f257-42d4-4934-805e-7f5d80988fa3-hosts-file\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464638 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28767351-ec5c-4f9e-8b01-2954eaf4ea30-mcd-auth-proxy-config\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhmw\" (UniqueName: \"kubernetes.io/projected/01c7f257-42d4-4934-805e-7f5d80988fa3-kube-api-access-lkhmw\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464673 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28767351-ec5c-4f9e-8b01-2954eaf4ea30-rootfs\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464688 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8zdwj\" (UniqueName: \"kubernetes.io/projected/28767351-ec5c-4f9e-8b01-2954eaf4ea30-kube-api-access-8zdwj\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01c7f257-42d4-4934-805e-7f5d80988fa3-hosts-file\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28767351-ec5c-4f9e-8b01-2954eaf4ea30-rootfs\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.465701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28767351-ec5c-4f9e-8b01-2954eaf4ea30-mcd-auth-proxy-config\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.478152 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.482328 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhmw\" (UniqueName: \"kubernetes.io/projected/01c7f257-42d4-4934-805e-7f5d80988fa3-kube-api-access-lkhmw\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.482971 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdwj\" (UniqueName: \"kubernetes.io/projected/28767351-ec5c-4f9e-8b01-2954eaf4ea30-kube-api-access-8zdwj\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.494940 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.508657 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.521868 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.533409 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.565606 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.565661 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565788 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565807 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565862 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.565838518 +0000 UTC m=+24.527085551 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565905 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.565881199 +0000 UTC m=+24.527128232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.659667 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.666072 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.666204 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.666244 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666406 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666435 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666449 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666512 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.666490453 +0000 UTC m=+24.627737486 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666582 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.666574235 +0000 UTC m=+24.627821268 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666638 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666650 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666659 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666686 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.666678638 +0000 UTC m=+24.627925671 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: W0130 21:40:26.674729 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c7f257_42d4_4934_805e_7f5d80988fa3.slice/crio-cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d WatchSource:0}: Error finding container cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d: Status 404 returned error can't find the container with id cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.717299 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-75j89"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.718062 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.719868 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xh5mg"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.720173 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.720462 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.720660 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721142 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721315 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721400 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721493 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721944 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.727223 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.731326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.732363 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.738048 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.750463 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766630 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766709 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-netns\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 
30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766742 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-etc-kubernetes\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766763 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-os-release\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-daemon-config\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766818 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-multus-certs\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766890 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gr57\" (UniqueName: \"kubernetes.io/projected/6722e8df-a635-4808-b6b9-d5633fc3d34b-kube-api-access-8gr57\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-k8s-cni-cncf-io\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766986 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-system-cni-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767072 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-hostroot\") pod \"multus-xh5mg\" (UID: 
\"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767091 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767116 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-cnibin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-bin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-os-release\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-socket-dir-parent\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767188 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-multus\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767220 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767238 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-cni-binary-copy\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767254 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-kubelet\") pod \"multus-xh5mg\" (UID: 
\"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767274 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-system-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767289 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-conf-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767307 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m46p\" (UniqueName: \"kubernetes.io/projected/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-kube-api-access-6m46p\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767358 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cnibin\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767385 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-binary-copy\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.780563 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.795350 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.811271 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.826912 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.841809 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.858016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869049 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-system-cni-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869235 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-system-cni-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869120 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869252 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-hostroot\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869452 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-cnibin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-bin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869521 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-os-release\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869542 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-socket-dir-parent\") pod 
\"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869547 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-cnibin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869565 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-multus\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-multus\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869638 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-kubelet\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869671 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-socket-dir-parent\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869693 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-cni-binary-copy\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869724 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-system-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869749 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-conf-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869755 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-kubelet\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869778 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m46p\" (UniqueName: \"kubernetes.io/projected/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-kube-api-access-6m46p\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cnibin\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-binary-copy\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-netns\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869895 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-etc-kubernetes\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869918 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-os-release\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-daemon-config\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-os-release\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870051 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-multus-certs\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869722 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-bin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870084 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gr57\" (UniqueName: \"kubernetes.io/projected/6722e8df-a635-4808-b6b9-d5633fc3d34b-kube-api-access-8gr57\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870116 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-conf-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870121 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870170 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-k8s-cni-cncf-io\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870240 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-k8s-cni-cncf-io\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870516 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-etc-kubernetes\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870547 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-netns\") pod \"multus-xh5mg\" (UID: 
\"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869808 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-system-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870017 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-os-release\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870599 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-multus-certs\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870646 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cnibin\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870858 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.871478 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-hostroot\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.872208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.872418 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-cni-binary-copy\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.872438 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-daemon-config\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.873006 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-binary-copy\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.886566 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.889106 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gr57\" (UniqueName: \"kubernetes.io/projected/6722e8df-a635-4808-b6b9-d5633fc3d34b-kube-api-access-8gr57\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.889378 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m46p\" (UniqueName: \"kubernetes.io/projected/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-kube-api-access-6m46p\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.900865 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.914121 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.928877 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.944826 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.959930 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.973945 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.991592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.012514 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.025272 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:49:12.020635742 +0000 UTC Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.031519 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.045001 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.055860 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.056466 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.062886 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xh5mg" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.070281 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.070298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.070300 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:27 crc kubenswrapper[4979]: E0130 21:40:27.070417 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:27 crc kubenswrapper[4979]: E0130 21:40:27.070507 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:27 crc kubenswrapper[4979]: E0130 21:40:27.070599 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.076622 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.077531 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.078774 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.079454 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.080121 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.081153 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.081833 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: W0130 21:40:27.098122 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6722e8df_a635_4808_b6b9_d5633fc3d34b.slice/crio-6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff WatchSource:0}: Error finding container 6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff: Status 404 returned error can't find the container with id 6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.112074 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.113022 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.115897 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.116225 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.129072 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.130009 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.130009 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.130205 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.136058 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.161487 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174858 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 
21:40:27.174900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174937 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174956 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174975 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174993 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175061 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"ovnkube-node-jttsv\" 
(UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175077 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175092 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175107 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175256 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175277 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175293 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175308 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175328 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175361 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.191170 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.212705 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.228918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerStarted","Data":"5638f5d00b204f802db25c86ead6d0695eac9f3235ca33932926822665e620ff"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.228913 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.232113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p8nz9" event={"ID":"01c7f257-42d4-4934-805e-7f5d80988fa3","Type":"ContainerStarted","Data":"d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.232160 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p8nz9" event={"ID":"01c7f257-42d4-4934-805e-7f5d80988fa3","Type":"ContainerStarted","Data":"cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.234652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.243200 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.257252 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276250 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276268 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276286 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276348 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276367 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276389 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276410 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276410 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276458 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276502 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276519 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276575 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"ovnkube-node-jttsv\" (UID: 
\"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276594 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276610 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276643 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276661 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276758 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276802 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc 
kubenswrapper[4979]: I0130 21:40:27.276442 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.277071 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.277752 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc 
kubenswrapper[4979]: I0130 21:40:27.276449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276511 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.283660 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.283758 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276516 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.284364 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.284944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.297700 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.303054 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.313078 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.327532 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.340241 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.355733 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.368506 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.369831 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.378589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28767351-ec5c-4f9e-8b01-2954eaf4ea30-proxy-tls\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.384017 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.403763 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.418868 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.430867 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.436741 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.445052 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: W0130 21:40:27.449009 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ce4851_1ecc_47da_89ca_09894eb0908a.slice/crio-f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4 WatchSource:0}: Error finding container f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4: Status 404 returned error can't find the container with id f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4 Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.460298 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.480209 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.498186 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.519098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.551477 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.555465 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.592239 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.630025 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: W0130 21:40:27.657547 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28767351_ec5c_4f9e_8b01_2954eaf4ea30.slice/crio-b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae WatchSource:0}: Error finding container b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae: Status 404 returned error can't find the container with id b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.670513 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.026309 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:42:58.938881971 +0000 UTC Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.240481 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" exitCode=0 Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.240558 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.240620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.242375 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.245152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.247183 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d" exitCode=0 Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.247315 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.249563 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.249610 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.249624 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.259098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.274701 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.287669 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.305011 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.320431 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.341469 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.358237 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.375085 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.391288 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.406897 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.427723 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.442947 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.460115 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.477016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.495473 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.516654 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.532592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.546937 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.562428 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.579269 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.595309 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.595359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595475 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595505 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595537 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.595521398 +0000 UTC m=+28.556768431 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595613 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.59558641 +0000 UTC m=+28.556833513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.600695 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.620056 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.636789 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.653202 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.677134 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.696316 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.696443 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.696481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696590 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696598 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696609 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696625 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696626 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696641 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696599 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.696559273 +0000 UTC m=+28.657806346 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696725 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.696703057 +0000 UTC m=+28.657950090 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696742 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.696734178 +0000 UTC m=+28.657981321 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.712802 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.027432 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:24:51.209630003 +0000 UTC
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.069274 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.069393 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.069274 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:29 crc kubenswrapper[4979]: E0130 21:40:29.069479 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:40:29 crc kubenswrapper[4979]: E0130 21:40:29.069566 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:40:29 crc kubenswrapper[4979]: E0130 21:40:29.069648 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.257602 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae" exitCode=0 Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.257701 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.263670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.263764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.263803 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.264060 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.264091 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.275503 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.293124 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.312529 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.328991 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.352602 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.361677 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-f2xld"] Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.365793 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371445 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371550 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371589 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371753 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.391535 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z 
is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.406338 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zncl\" (UniqueName: \"kubernetes.io/projected/65d4cf3f-dc90-408a-9652-740d7472fb39-kube-api-access-5zncl\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.406374 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65d4cf3f-dc90-408a-9652-740d7472fb39-serviceca\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.406424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65d4cf3f-dc90-408a-9652-740d7472fb39-host\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.410271 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.428906 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.445756 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.461842 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.477326 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.492982 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506862 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zncl\" (UniqueName: \"kubernetes.io/projected/65d4cf3f-dc90-408a-9652-740d7472fb39-kube-api-access-5zncl\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65d4cf3f-dc90-408a-9652-740d7472fb39-serviceca\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506959 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65d4cf3f-dc90-408a-9652-740d7472fb39-host\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.507095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65d4cf3f-dc90-408a-9652-740d7472fb39-host\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.508451 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65d4cf3f-dc90-408a-9652-740d7472fb39-serviceca\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.521686 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.526977 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zncl\" (UniqueName: \"kubernetes.io/projected/65d4cf3f-dc90-408a-9652-740d7472fb39-kube-api-access-5zncl\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.537675 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.550375 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.564328 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.577767 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.593141 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.614277 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.650784 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.691574 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.692330 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.734602 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.775289 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.814857 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.855248 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.891066 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.027832 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:58:38.425999164 +0000 UTC Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.270407 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f2xld" event={"ID":"65d4cf3f-dc90-408a-9652-740d7472fb39","Type":"ContainerStarted","Data":"da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.270492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f2xld" event={"ID":"65d4cf3f-dc90-408a-9652-740d7472fb39","Type":"ContainerStarted","Data":"9d57eac96d748a9a1f760d1fdd0b0fb1bb1445b853af54abb7355c05fa5e86d5"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.273612 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38" exitCode=0 Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.273711 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.278468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.291019 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.306597 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.320950 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.341196 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.355262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.368599 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.382773 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.396707 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.411672 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.427776 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.440519 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.460020 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.476507 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.490380 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.507389 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.533099 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.572403 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.617569 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.654490 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.695245 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.731966 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.772858 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.810556 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.853277 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.890150 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.935245 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.973689 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.013025 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.028287 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:18:24.995427886 +0000 UTC Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.068846 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.068861 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.069010 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.069166 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.069185 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.069370 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.229826 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232294 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232467 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.240513 4979 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.240818 4979 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242060 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242127 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242149 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242161 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.260880 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265428 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265488 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.285554 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510" exitCode=0 Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.285618 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510"} Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.286142 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291249 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291261 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291278 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291292 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.298667 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc 
kubenswrapper[4979]: E0130 21:40:31.305300 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted here; byte-identical to the one logged for the previous attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z"
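Every failure above shares one root cause: the network-node-identity webhook listening on https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is long past the node clock (2026-01-30T21:40:31Z), so every kubelet status patch is rejected before it reaches the API object. Below is a minimal diagnostic sketch, not part of the log, for confirming the certificate window from the node itself; it assumes only that the endpoint named in the error messages is reachable locally. InsecureSkipVerify is set so the TLS handshake completes despite the invalid chain, which lets the program read the leaf certificate:

// certcheck.go — hypothetical helper, assuming the webhook endpoint quoted in
// the errors above (127.0.0.1:9743) is reachable from where this runs.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Diagnostic only: skip chain verification so the handshake succeeds
	// even though the certificate is expired; we only want to read it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0] // serving certificate
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Mirrors the kubelet's error text: "current time ... is after ..."
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	}
}

On this node the final line would match the x509 message the kubelet keeps logging, "current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z". The log then continues with the next round of node events: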
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310267 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310287 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.317060 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.324246 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted here; byte-identical to the previous attempts above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after
2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.327948 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328132 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328753 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.341723 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted here; byte-identical to the previous attempts above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z"
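The payload the kubelet keeps retrying is a strategic-merge-style patch against the Node object (note the $setElementOrder directive in the full payload above); its interesting part is the four conditions, since the image list and node info never change between attempts. A short sketch, not part of the log, that decodes an abridged, hypothetical stand-in for that payload (conditions only, field names as they appear in the log) to show its shape:

// patchpeek.go — the JSON literal below is an abridged stand-in for the
// logged payload, not a verbatim copy; the images list is omitted.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type statusPatch struct {
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Abridged from the payload above: four conditions, Ready=False because
	// the CNI configuration is missing.
	raw := `{"status":{"conditions":[
	 {"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory"},
	 {"type":"DiskPressure","status":"False","reason":"KubeletHasNoDiskPressure"},
	 {"type":"PIDPressure","status":"False","reason":"KubeletHasSufficientPID"},
	 {"type":"Ready","status":"False","reason":"KubeletNotReady",
	  "message":"container runtime network not ready: NetworkReady=false"}]}}`

	var p statusPatch
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, c := range p.Status.Conditions {
		fmt.Printf("%-15s status=%-5s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}

None of the retries can land because the webhook rejects every Post, and the very next entry shows the kubelet exhausting its retry budget ("update node status exceeds retry count") before the status loop starts over: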
2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.341849 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.343421 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344358 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344369 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344386 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344397 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.358564 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.373681 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.387916 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.402882 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.419065 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447216 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447279 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447315 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.451726 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.492437 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.540060 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551494 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.573803 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.619901 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653813 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653841 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756684 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756695 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756732 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860681 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860702 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860715 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963661 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963732 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963744 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.029149 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:30:43.412397541 +0000 UTC
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067898 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067960 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.171933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172092 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172129 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172153 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276082 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276106 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276117 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.295763 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6" exitCode=0
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.296055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.303578 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.312098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.329901 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.342743 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.359993 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378746 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378762 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.379989 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.396532 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.414750 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.430727 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.445730 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.463069 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.480630 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481522 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.501926 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4
bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.522940 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d
43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.541969 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.584937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.584974 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.584984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.585000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.585010 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.641926 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.642017 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642267 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642359 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.642332749 +0000 UTC m=+36.603579822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642427 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642482 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.642464343 +0000 UTC m=+36.603711416 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688637 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.743568 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.743778 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.743868 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.743827047 +0000 UTC m=+36.705074140 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.743958 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744011 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744077 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744098 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744180 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.744154346 +0000 UTC m=+36.705401419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744291 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744331 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744347 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744426 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.744406412 +0000 UTC m=+36.705653445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791726 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895273 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.999011 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.030329 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:51:57.898573758 +0000 UTC Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.069508 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.069607 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:33 crc kubenswrapper[4979]: E0130 21:40:33.069766 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.069873 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:33 crc kubenswrapper[4979]: E0130 21:40:33.070018 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:33 crc kubenswrapper[4979]: E0130 21:40:33.070188 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101771 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101930 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204793 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204833 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306901 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306965 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.311522 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f" exitCode=0 Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.311574 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.327748 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.343933 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.360000 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.375023 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.391652 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.408228 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409131 4979 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409213 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409225 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.422989 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.438093 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.453149 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.470433 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.487465 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.499931 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.515794 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.540108 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615904 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615937 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718662 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718689 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.820949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.820999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.821010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.821057 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.821073 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929420 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929472 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929505 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.030498 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:53:12.520429592 +0000 UTC Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032551 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032591 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135646 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135677 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135688 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239611 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.320194 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerStarted","Data":"11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.325831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.326690 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.326762 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.337638 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc 
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343527 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343718 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.354664 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.359438 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.362168 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.367786 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.383259 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.399176 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.416208 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.434971 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446575 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.456437 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.471830 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.486334 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.499534 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.510320 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.521208 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.532659 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.544799 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548814 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548868 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548896 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.564011 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.578269 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.590934 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.602259 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.616159 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.627611 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.637408 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.647207 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650826 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650845 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650857 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.665463 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.676701 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.696356 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.712893 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.729955 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753471 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.777917 4979 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856555 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856588 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.960996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961404 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961437 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961452 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.031112 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:26:58.92575798 +0000 UTC Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065625 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.069245 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:35 crc kubenswrapper[4979]: E0130 21:40:35.069438 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.069581 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.069584 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:35 crc kubenswrapper[4979]: E0130 21:40:35.069681 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:35 crc kubenswrapper[4979]: E0130 21:40:35.069775 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.091643 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.110420 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.126479 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.145643 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168159 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168202 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.175301 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.199359 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.218262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.234874 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.251843 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.265751 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.270942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.270979 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.270990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.271006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.271017 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.278917 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.301835 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.323358 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.329141 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.339872 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374202 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374306 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477627 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477663 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.580958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581005 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581054 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581065 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684543 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684567 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788258 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788318 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891436 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891554 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994421 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994500 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.032243 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:51:16.520386678 +0000 UTC Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097209 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200235 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200283 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303606 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303676 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303693 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303734 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.332087 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407549 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407631 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.511002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.511687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.511868 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.512095 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.512286 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615598 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615627 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718848 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718934 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718963 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718980 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822394 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.925877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.925956 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.925973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.926003 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.926024 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029185 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029198 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.033136 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:16:39.941201941 +0000 UTC Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.069174 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.069199 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:37 crc kubenswrapper[4979]: E0130 21:40:37.069347 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:37 crc kubenswrapper[4979]: E0130 21:40:37.069384 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.069288 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:37 crc kubenswrapper[4979]: E0130 21:40:37.069464 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131549 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131610 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131642 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234440 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234493 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234508 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234557 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336646 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440552 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440571 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440595 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440612 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543624 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543673 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543697 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646318 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646347 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749058 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749104 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749115 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749160 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855809 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855832 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855842 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.958993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959091 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959147 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.033725 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:45:36.440946489 +0000 UTC Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061546 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061573 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164255 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164390 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267231 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267263 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.341071 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/0.log" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.343834 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940" exitCode=1 Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.343874 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.344868 4979 scope.go:117] "RemoveContainer" containerID="feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.360098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388267 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.403672 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.427592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.449552 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.464371 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.476329 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.489443 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491241 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491316 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.503323 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.518144 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.532003 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.555214 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.572366 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.590485 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.594935 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.594975 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.594987 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.595005 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.595017 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.608767 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700023 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700157 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700182 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803290 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906683 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906789 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906857 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.010881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.010957 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.010977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.011002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.011022 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.034364 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:26:54.226699242 +0000 UTC Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.069382 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.069463 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:39 crc kubenswrapper[4979]: E0130 21:40:39.069577 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:39 crc kubenswrapper[4979]: E0130 21:40:39.069795 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.069884 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:39 crc kubenswrapper[4979]: E0130 21:40:39.070539 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.113907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114483 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114746 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.184305 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9"] Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.185273 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.187655 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.188320 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.214105 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218531 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218598 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218620 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218634 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.253059 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.276016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.302927 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.319593 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321383 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321435 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321504 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vl5d\" (UniqueName: \"kubernetes.io/projected/cb7a0992-0b0f-4219-ac47-fb6021840903-kube-api-access-2vl5d\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321866 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321905 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7a0992-0b0f-4219-ac47-fb6021840903-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321985 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.339354 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.353664 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.371068 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.389942 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.404954 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.420974 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422763 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vl5d\" (UniqueName: \"kubernetes.io/projected/cb7a0992-0b0f-4219-ac47-fb6021840903-kube-api-access-2vl5d\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422830 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7a0992-0b0f-4219-ac47-fb6021840903-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: 
I0130 21:40:39.424151 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424647 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424749 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.425892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.433593 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7a0992-0b0f-4219-ac47-fb6021840903-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.436550 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.445239 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2vl5d\" (UniqueName: \"kubernetes.io/projected/cb7a0992-0b0f-4219-ac47-fb6021840903-kube-api-access-2vl5d\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.460811 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d
708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.496532 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.501378 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.508255 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: W0130 21:40:39.516854 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb7a0992_0b0f_4219_ac47_fb6021840903.slice/crio-ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24 WatchSource:0}: Error finding container ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24: Status 404 returned error can't find the container with id ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24 Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527211 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630778 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733789 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733860 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733905 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836793 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836899 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940514 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940578 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940624 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940651 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.035321 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:50:46.553411228 +0000 UTC Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044840 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044896 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044941 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044961 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148983 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252555 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252570 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.355962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356178 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.357713 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/0.log"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.362013 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.364219 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" event={"ID":"cb7a0992-0b0f-4219-ac47-fb6021840903","Type":"ContainerStarted","Data":"ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459623 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459708 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565610 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565640 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565666 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668638 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668782 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668806 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.716145 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pk47q"]
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.716733 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.716816 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
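[Annotation] The repeated KubeletNotReady condition above reduces to a single check: the container runtime reports NetworkReady=false until a CNI network configuration file exists under /etc/kubernetes/cni/net.d/, and on this CRC node that file is only written once the OVN-Kubernetes/Multus pods come up (the ContainerStarted events for ovnkube-node are the first sign of progress). A minimal sketch of that directory test, assuming only that "configured" means at least one .conf/.conflist/.json file is present; this approximates the runtime's libcni config discovery, not kubelet's exact code:

```go
// cnicheck: approximate the "is CNI configured?" test behind the
// NetworkReady=false condition above. Illustrative only; the real
// check lives in the container runtime's CNI config loader.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI
// network configuration file.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Mirrors the log: the runtime stays NetworkReady=false and
		// kubelet keeps publishing reason=KubeletNotReady.
		fmt.Println("network plugin not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Until that check passes, pods without host networking (like network-metrics-daemon below) fail sync with the same "network is not ready" error rather than getting a sandbox.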
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.736413 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.738934 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.739004 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.739182 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:40 crc 
kubenswrapper[4979]: E0130 21:40:40.739310 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.739272911 +0000 UTC m=+52.700520004 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.739441 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.739597 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.739563139 +0000 UTC m=+52.700810202 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.760384 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772451 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.779157 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.798352 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.817575 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.839257 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840070 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840225 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.840183103 +0000 UTC m=+52.801430136 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840304 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmzk\" (UniqueName: \"kubernetes.io/projected/d0632938-c88a-4c22-b0e7-8f7473532f07-kube-api-access-jbmzk\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840465 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840702 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840723 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840739 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840733 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840766 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840780 4979 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840793 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.840781289 +0000 UTC m=+52.802028442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840838 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.84081687 +0000 UTC m=+52.802063923 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.857285 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.872565 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc 
kubenswrapper[4979]: I0130 21:40:40.875528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875553 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.889113 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.908119 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.921610 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.941920 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.942061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbmzk\" (UniqueName: \"kubernetes.io/projected/d0632938-c88a-4c22-b0e7-8f7473532f07-kube-api-access-jbmzk\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.942260 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.942415 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:41.442374619 +0000 UTC m=+37.403621692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.951544 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.960837 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmzk\" (UniqueName: \"kubernetes.io/projected/d0632938-c88a-4c22-b0e7-8f7473532f07-kube-api-access-jbmzk\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.973675 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979243 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979360 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.987966 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.003354 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.020872 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.036080 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:40:27.320127802 +0000 UTC Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.069116 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.069187 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.069255 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.069412 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.069556 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.069692 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.082926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.082993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.083014 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.083081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.083103 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186832 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186905 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290783 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290834 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290854 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394356 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394371 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394407 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.448818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.449010 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.449115 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:42.449093745 +0000 UTC m=+38.410340788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered
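
The nestedpendingoperations record above shows kubelet's per-volume retry gate: after a failed MountVolume.SetUp, no retry is permitted until now + durationBeforeRetry, and that duration grows exponentially with consecutive failures. A sketch of that schedule, assuming a 500ms initial delay, a doubling factor, and a cap of roughly two minutes (the shape of kubelet's exponential backoff; the exact constants here are assumptions, not a quote of the source):

// backoff.go — sketch of the schedule behind
// "No retries permitted until ... (durationBeforeRetry 1s)".
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration // durationBeforeRetry
	next  time.Time     // no retries permitted until this instant
}

// fail records one more failed attempt and advances the retry gate.
func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
		if limit := 2*time.Minute + 2*time.Second; b.delay > limit {
			b.delay = limit // assumed cap
		}
	}
	b.next = now.Add(b.delay)
}

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 4; i++ {
		b.fail(now)
		fmt.Printf("failure %d: durationBeforeRetry %s, retry after %s\n",
			i, b.delay, b.next.Format(time.RFC3339))
	}
}

Under this schedule the 1s in the log would correspond to the second consecutive failure for this volume, consistent with the earlier "not registered" error at the top of this excerpt.
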
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497339 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497352 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600930 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600982 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.601008 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
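
The recurring "failed calling webhook" errors in the surrounding records all have one cause: the network-node-identity webhook at https://127.0.0.1:9743 presents a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-01-30). The error text is produced by Go's crypto/x509 verifier; the following self-contained sketch reproduces it with a throwaway self-signed certificate, using the dates from the log (the CommonName is illustrative, and error returns are ignored for brevity):

// expired.go — reproduces the crypto/x509 failure seen in every
// "failed calling webhook" record above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:             time.Date(2025, 2, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:              time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // from the log
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)
	_, err := cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 1, 30, 21, 40, 41, 0, time.UTC), // node clock
	})
	// Prints: x509: certificate has expired or is not yet valid:
	// current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z
	fmt.Println(err)
}

Until that certificate is rotated (or the node's clock falls back inside its validity window), every pod and node status patch that must pass through this webhook will keep failing with the same message, which is why the updates above retry indefinitely.
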
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705475 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705556 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705616 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705643 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744309 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.767459 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773240 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773306 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773324 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773375 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.795295 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801935 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801971 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801989 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.824890 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831624 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831863 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.855117 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861810 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861837 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861855 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.884361 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.884663 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887308 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990712 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990786 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.036901 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:11:01.261467903 +0000 UTC Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.068942 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:42 crc kubenswrapper[4979]: E0130 21:40:42.069118 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094180 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094279 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094300 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094348 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197397 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197445 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197475 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197488 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305437 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305516 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305582 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.375199 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.394586 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409168 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409230 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409262 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.412241 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.430238 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.454830 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.460922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:42 crc kubenswrapper[4979]: E0130 21:40:42.461190 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:42 crc kubenswrapper[4979]: E0130 21:40:42.461313 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:44.461277063 +0000 UTC m=+40.422524136 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.471128 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.487909 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.508236 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513430 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.524193 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.549937 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.574650 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.601305 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 
21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616172 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616375 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.619644 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.639323 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.660399 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.682489 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.706849 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718822 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718869 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822241 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822347 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822366 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925346 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925416 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925435 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925447 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.028989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029043 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029056 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029083 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.037378 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:22:18.801961418 +0000 UTC Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.069369 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.069533 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.069448 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131655 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131706 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234627 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234695 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338420 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338482 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338518 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.385522 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.386634 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/0.log" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.390068 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" exitCode=1 Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.390145 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.390239 4979 scope.go:117] "RemoveContainer" containerID="feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.391963 4979 scope.go:117] "RemoveContainer" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.392518 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.392695 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" event={"ID":"cb7a0992-0b0f-4219-ac47-fb6021840903","Type":"ContainerStarted","Data":"f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.392721 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" event={"ID":"cb7a0992-0b0f-4219-ac47-fb6021840903","Type":"ContainerStarted","Data":"ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.406714 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.422181 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.433980 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442205 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442242 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.454233 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.473572 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.489695 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.509142 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.523382 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.538661 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545307 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545358 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545395 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.554211 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.570574 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.595951 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b
69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.613100 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.625119 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.641558 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.647980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648092 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.655683 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.670134 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.684718 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.701699 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.716674 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.731754 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.745557 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750662 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750677 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750698 4979 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750713 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.762770 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.774904 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.790137 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.804775 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.823490 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693
a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.835669 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.848760 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853864 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853882 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853894 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.868235 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.883382 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.898203 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.956928 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.956981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.956993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.957012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.957024 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.037755 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:22:38.311155926 +0000 UTC
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.059940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060004 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060106 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.069350 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:44 crc kubenswrapper[4979]: E0130 21:40:44.069588 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168642 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272387 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272451 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.375998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376112 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376127 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376151 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376165 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.397338 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479929 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479991 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.484709 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:44 crc kubenswrapper[4979]: E0130 21:40:44.485070 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:40:44 crc kubenswrapper[4979]: E0130 21:40:44.485184 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:48.485150617 +0000 UTC m=+44.446397830 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583419 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583499 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583534 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
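The NotReady loop above is driven by a single condition: the runtime keeps reporting NetworkReady=false because no CNI network config has appeared in /etc/kubernetes/cni/net.d/ yet (on this cluster, multus writes one once the default ovn-kubernetes network is up). A minimal sketch of that readiness condition, using a hypothetical cniConfigPresent helper rather than the real CRI-O/libcni code:

```go
// Hypothetical sketch of a CNI readiness probe; the real check is performed
// by CRI-O/libcni, not by this code. It mirrors the "no CNI configuration
// file in /etc/kubernetes/cni/net.d/" condition seen in the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cniConfigPresent reports whether confDir holds at least one CNI network
// config (.conf, .conflist, or .json), which is roughly what the runtime
// needs before it can report NetworkReady=true.
func cniConfigPresent(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	const confDir = "/etc/kubernetes/cni/net.d/"
	ok, err := cniConfigPresent(confDir)
	if err != nil || !ok {
		// Same shape as the kubelet's complaint in the log.
		fmt.Printf("network plugin not ready: no CNI configuration file in %s (err=%v)\n", confDir, err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```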
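The nestedpendingoperations entry above schedules the next MountVolume attempt 4s out. That value is consistent with a doubling backoff from an initial 500ms (0.5s, 1s, 2s, 4s); the sketch below illustrates the scheme with assumed initial and cap values, not kubelet's exact implementation:

```go
// A sketch of the doubling backoff behind "(durationBeforeRetry 4s)" above.
// The initial and cap values are assumptions for illustration; kubelet's
// nestedpendingoperations keeps similar per-operation retry state.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max, next time.Duration
}

// nextDelay returns the current delay and doubles it for the next failure,
// clamping at max.
func (b *backoff) nextDelay() time.Duration {
	if b.next == 0 {
		b.next = b.initial
	}
	d := b.next
	if b.next *= 2; b.next > b.max {
		b.next = b.max
	}
	return d
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	// Four consecutive mount failures: 500ms, 1s, 2s, 4s -- the fourth
	// failure matches the "(durationBeforeRetry 4s)" line in the log.
	for i := 1; i <= 4; i++ {
		fmt.Printf("failure %d: retry in %v\n", i, b.nextDelay())
	}
}
```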
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686630 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789754 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789801 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789829 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789839 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892330 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892351 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892370 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892384 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021737 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021856 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.038938 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:30:23.914545118 +0000 UTC
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.069116 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.069168 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.069301 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:45 crc kubenswrapper[4979]: E0130 21:40:45.069314 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:40:45 crc kubenswrapper[4979]: E0130 21:40:45.069447 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:40:45 crc kubenswrapper[4979]: E0130 21:40:45.069626 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
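The two certificate_manager.go:356 lines above (rotation deadline 2025-12-23 on the first pass, 2026-01-07 on this one, for the same 2026-02-24 expiry) reflect a jittered deadline that is recomputed on each pass: client-go's certificate manager picks a random point late in the certificate's validity window, roughly 70-90% of the way through. A sketch under that assumption, with an invented notBefore since the log only shows the expiry:

```go
// Sketch of the jittered rotation deadline behind the certificate_manager
// lines above. The 70-90% window is an approximation of client-go's
// behavior, and notBefore is invented for the demo (the log only records
// the expiry, 2026-02-24 05:53:03 UTC).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a random deadline in the 70-90% stretch of the
// certificate's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 7, 1, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	// Each recomputation draws a fresh random point, which is why the log
	// shows a different deadline on consecutive passes.
	for i := 0; i < 2; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```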
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.087218 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.101599 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.117263 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125415 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125452 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125463 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125495 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.133020 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.149430 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.163531 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.180668 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.199416 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.215225 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.228725 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229098 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229470 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.230422 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.246175 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.296589 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b
69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.313483 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.327927 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332328 4979 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332362 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.346835 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.361640 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.434980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435058 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435069 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539365 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.642927 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.642983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.642999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.643021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.643055 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746647 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746714 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746728 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746751 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746766 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.850949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851080 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851158 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953677 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953762 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.039481 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:38:16.947750316 +0000 UTC Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057005 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057084 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057124 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057140 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.068931 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:46 crc kubenswrapper[4979]: E0130 21:40:46.069177 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.160001 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
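
Note the certificate_manager.go:356 lines interleaved with the readiness spam: the kubelet-serving certificate itself is healthy (valid until 2026-02-24), and each pass logs a freshly jittered rotation deadline; compare the differing deadlines at 21:40:46, 21:40:47, and later, all of which already lie in the past, so rotation is due and keeps being reattempted. A sketch of jittered-deadline selection; the 70-90% band is an assumption for illustration, not the kubelet's exact constant:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point in the tail of the certificate's
    // validity window. Re-running it explains why each log line shows a
    // different deadline for the same NotAfter.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        fraction := 0.7 + 0.2*rand.Float64() // assumed band: [70%, 90%) of lifetime
        return notBefore.Add(time.Duration(float64(total) * fraction))
    }

    func main() {
        notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // hypothetical issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry from the log
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
        }
    }
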
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263769 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263802 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.367784 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368182 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368741 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472459 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472551 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472597 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575645 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575680 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575708 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679156 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781510 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884860 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884971 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884988 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988526 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988564 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.040387 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:06:53.587982159 +0000 UTC Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.069149 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.069148 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:47 crc kubenswrapper[4979]: E0130 21:40:47.069333 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.069166 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:47 crc kubenswrapper[4979]: E0130 21:40:47.069593 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:47 crc kubenswrapper[4979]: E0130 21:40:47.069663 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.091951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092127 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092179 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.196311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.196734 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.196876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.197097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.197285 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301237 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404143 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507452 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507594 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610819 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610882 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610940 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714554 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714587 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
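
For reference, the condition object that setters.go:603 logs on every iteration has the shape below; a stand-in struct that reproduces the same JSON, with the field set taken from the log entries and simplified from the Kubernetes NodeCondition type:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // nodeCondition mirrors the fields visible in the "Node became not ready"
    // entries; time.Time marshals to the RFC 3339 form seen in the log.
    type nodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        now := time.Date(2026, 1, 30, 21, 40, 45, 0, time.UTC)
        c := nodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false ...",
        }
        out, _ := json.Marshal(c)
        fmt.Println(string(out))
    }
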
Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817684 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817803 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921249 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921316 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921352 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024009 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024586 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.041438 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:15:10.693067657 +0000 UTC Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.069171 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:48 crc kubenswrapper[4979]: E0130 21:40:48.069329 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128401 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128416 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128442 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128458 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231554 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231566 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334497 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334577 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440047 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440142 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440183 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.535331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:48 crc kubenswrapper[4979]: E0130 21:40:48.535595 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:48 crc kubenswrapper[4979]: E0130 21:40:48.535734 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.53570251 +0000 UTC m=+52.496949723 (durationBeforeRetry 8s). 
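
The metrics-certs mount failure just above is retried under exponential backoff: "No retries permitted until 2026-01-30 21:40:56 ... (durationBeforeRetry 8s)" means the shorter delays were already consumed by earlier attempts, and on a doubling schedule 8s would be roughly the fifth delay. A sketch of such a schedule; the initial value and cap are assumptions for illustration, not the kubelet's exact constants:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the previous delay up to a cap, matching the
    // "durationBeforeRetry" progression implied by the log (0.5s -> 1s ->
    // 2s -> 4s -> 8s -> ...).
    func nextBackoff(current, maxDelay time.Duration) time.Duration {
        if current <= 0 {
            return 500 * time.Millisecond // assumed initial delay
        }
        next := 2 * current
        if next > maxDelay {
            return maxDelay
        }
        return next
    }

    func main() {
        d := time.Duration(0)
        for i := 0; i < 7; i++ {
            d = nextBackoff(d, 2*time.Minute+2*time.Second) // assumed cap
            fmt.Println("durationBeforeRetry:", d)
        }
    }
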
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543864 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543875 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646971 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750385 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853251 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853300 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955387 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955445 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955483 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.041629 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:41:02.433061209 +0000 UTC Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.057956 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058055 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058088 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.069383 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.069416 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.069383 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:49 crc kubenswrapper[4979]: E0130 21:40:49.069562 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:49 crc kubenswrapper[4979]: E0130 21:40:49.069724 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:49 crc kubenswrapper[4979]: E0130 21:40:49.069849 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.160491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.160856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.160980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.161140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.161233 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264281 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264332 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367051 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367669 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471716 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471795 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471806 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575210 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575307 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575342 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.678618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679234 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679543 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783637 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990551 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990637 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990685 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.042755 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:21:58.92873465 +0000 UTC Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.069118 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:50 crc kubenswrapper[4979]: E0130 21:40:50.069315 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093364 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093412 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093444 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093458 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196135 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196295 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299508 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299520 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.402961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403127 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506520 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610361 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610381 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610430 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.713981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714063 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714076 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714114 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817103 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817149 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919600 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919641 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919655 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022765 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022801 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.043322 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:58:18.792489055 +0000 UTC Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.069961 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:51 crc kubenswrapper[4979]: E0130 21:40:51.070192 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.070276 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.070356 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:51 crc kubenswrapper[4979]: E0130 21:40:51.070484 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:51 crc kubenswrapper[4979]: E0130 21:40:51.070656 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125756 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125817 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125832 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125888 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229535 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229626 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332578 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332590 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435754 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435914 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435940 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540207 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540251 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540269 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643702 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643843 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.747931 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748082 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748148 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.898943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899027 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899089 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002692 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002746 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025438 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025496 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.043522 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:41:44.753192857 +0000 UTC Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.050162 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057086 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057251 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.068665 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.068824 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.077052 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082623 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.095921 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.099949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.099997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.100007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.100024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.100052 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.114510 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.119003 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.131811 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.131951 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135115 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135150 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238962 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342185 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444882 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.445001 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546754 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546813 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546824 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649220 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649302 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751703 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751734 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.854918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.854989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.855002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.855023 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.855067 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958087 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.044213 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:07:15.270365459 +0000 UTC
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061339 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061386 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.069882 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.069939 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.069967 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:53 crc kubenswrapper[4979]: E0130 21:40:53.070103 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:40:53 crc kubenswrapper[4979]: E0130 21:40:53.070234 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:40:53 crc kubenswrapper[4979]: E0130 21:40:53.070397 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.163712 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164321 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164463 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164534 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267820 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267875 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267901 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.371824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372242 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.487759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.488825 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.488991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.489162 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.489385 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592419 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592516 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696452 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696498 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799151 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799291 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799355 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902840 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902917 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005085 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005160 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005192 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.044761 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:24:49.023718232 +0000 UTC
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.069594 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:54 crc kubenswrapper[4979]: E0130 21:40:54.069770 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.071107 4979 scope.go:117] "RemoveContainer" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.086231 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.103920 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109661 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109716 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109730 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109765 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.118456 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.134265 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.148144 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.167056 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.183863 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.197750 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214241 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214719 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.230461 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.253220 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.280431 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c
3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.297634 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.308345 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320304 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.323633 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.336836 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422675 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.438527 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.441255 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.441401 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.456410 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e093
1ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.475631 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.490555 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.507242 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.520099 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524909 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.525004 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.551093 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.572817 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.595993 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.612486 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627647 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.633210 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.649085 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.661687 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.678813 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.693262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.706739 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.726664 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730432 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730465 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833673 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833688 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937688 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041234 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041278 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.045905 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:00:13.894151515 +0000 UTC Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.069166 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.069295 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.069342 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.069318 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.069496 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.069771 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.087462 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.100077 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.114248 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.126760 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.137553 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143901 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143948 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc 
kubenswrapper[4979]: I0130 21:40:55.143964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143976 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.148768 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.161026 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.175828 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.192662 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.212756 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.227562 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248009 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248138 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248227 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.252086 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.271933 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.290917 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.306318 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.321299 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.350883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351594 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.446859 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.447812 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.450990 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" exitCode=1 Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.451055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.451099 4979 scope.go:117] "RemoveContainer" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.451959 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.452166 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455262 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455294 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455324 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455337 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.469555 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.488394 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.516148 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node 
network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.528009 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.539587 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558410 4979 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558442 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558458 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.559628 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.576494 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.593741 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.605764 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.619951 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.634870 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.651208 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662378 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.667083 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.679234 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.698889 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.710430 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.765944 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766016 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766084 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766104 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870088 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870270 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973258 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973278 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973291 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973987 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.046332 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:58:51.895008686 +0000 UTC Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.068645 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.068831 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076877 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180111 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180180 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180240 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.282920 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.282983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.282994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.283013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.283028 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385794 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385865 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385900 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.463090 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.468517 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.468761 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.485541 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490118 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490185 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490228 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.503980 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.519420 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.542387 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.557559 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.571254 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.587578 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.592969 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593485 4979 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593570 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593646 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.593152 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.594286 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:12.594264007 +0000 UTC m=+68.555511040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.604652 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.619094 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.633724 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.645645 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.661484 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.677359 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695406 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695415 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695430 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695440 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.697767 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.716359 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.734268 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.795434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.795504 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795667 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795736 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795842 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.795814433 +0000 UTC m=+84.757061466 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795884 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.795876314 +0000 UTC m=+84.757123347 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797844 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797887 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
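
Every "Failed to update status for pod" entry above shares one root cause: the serving certificate behind the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. A minimal Go sketch of the same check (a standalone triage helper, not kubelet code; the address is copied from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Endpoint taken from the webhook error above; adjust as needed.
        addr := "127.0.0.1:9743"
        // InsecureSkipVerify lets us inspect an expired certificate
        // instead of failing the handshake outright.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s\n  NotBefore=%s\n  NotAfter=%s\n  expired=%v\n",
                cert.Subject, cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
        }
    }

Against an expired chain, crypto/x509 verification produces exactly the "certificate has expired or is not yet valid" text seen in these entries.
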
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.896963 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897215 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.897169926 +0000 UTC m=+84.858416959 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.897297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.897340 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897501 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897528 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897541 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897554 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897580 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897599 4979 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897603 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.897588458 +0000 UTC m=+84.858835491 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897652 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.897639919 +0000 UTC m=+84.858887132 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900853 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
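
The nestedpendingoperations entries above requeue each failed MountVolume/UnmountVolume with exponential backoff; a durationBeforeRetry of 32s means the operation has already failed several times, with the delay doubling per attempt. A sketch of that doubling (the 500ms base and 2m cap are assumptions for illustration, not quotes of kubelet's constants):

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative only: the real values live in kubelet's
    // nestedpendingoperations/goroutinemap packages.
    const (
        initialDelay = 500 * time.Millisecond // assumed base delay
        factor       = 2.0
        maxDelay     = 2 * time.Minute // assumed cap
    )

    func delayForFailure(n int) time.Duration {
        d := initialDelay
        for i := 1; i < n; i++ {
            d = time.Duration(float64(d) * factor)
            if d > maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 8; n++ {
            fmt.Printf("failure %d -> retry in %v\n", n, delayForFailure(n))
        }
        // Under these assumptions, failure 7 -> retry in 32s,
        // matching durationBeforeRetry in the log.
    }
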
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003581 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003662 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003676 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003712 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.047143 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:20:00.586491798 +0000 UTC Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.069123 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.069204 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:57 crc kubenswrapper[4979]: E0130 21:40:57.069331 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.069370 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:57 crc kubenswrapper[4979]: E0130 21:40:57.069500 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:57 crc kubenswrapper[4979]: E0130 21:40:57.069628 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
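
The certificate_manager.go:356 lines are worth a second look: the same kubelet-serving certificate (expires 2026-02-24) is assigned a different rotation deadline on each pass — 2025-12-14 here, 2025-11-20 and 2025-12-07 in the following seconds. The manager re-jitters the deadline inside the certificate's validity window on every evaluation so that fleets of kubelets don't renew simultaneously; since every computed deadline is already in the past, the check simply fires again each loop. An illustrative re-jitter (the 70-90% window and the 1-year validity are assumptions consistent with the observed spread, not quotes of the client-go source):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mimics the re-jittered deadline: somewhere
    // between 70% and 90% of the certificate's lifetime after NotBefore.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0) // assumed 1-year validity
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }
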
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.106991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107112 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107159 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209773 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209852 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209880 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209900 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
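
The kube-api-access-* mounts failing above are projected service-account volumes: kubelet assembles a token plus the kube-root-ca.crt and (on OpenShift) openshift-service-ca.crt ConfigMaps into one directory, and the mount cannot proceed while those ConfigMaps are "not registered" in its object cache. The shape of such a volume in the corev1 Go types — a sketch for orientation, not the exact object from this cluster:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607)
        vol := corev1.Volume{
            Name: "kube-api-access-s2dwl",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // Bound service-account token, rotated by kubelet.
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            ExpirationSeconds: &expiry, Path: "token"}},
                        // The two CA bundles named in the errors above.
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"}}},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"}}},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
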
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315411 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315446 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419222 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419716 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523230 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625631 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625685 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625714 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625726 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729300 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.831977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832299 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832574 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
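
Every NodeNotReady heartbeat in this stretch repeats one condition: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet; the multus pods earlier in the log are what eventually write it. A quick standalone check of the same directory (path copied from the message; the accepted extensions follow libcni's conventions):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path from the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts
                fmt.Println("CNI config:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file found; network plugin not ready")
        }
    }
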
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936432 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039400 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039415 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039425 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.047739 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:08:13.667236872 +0000 UTC Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.069266 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:58 crc kubenswrapper[4979]: E0130 21:40:58.069752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.142544 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.142836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.142937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.143012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.143107 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245842 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245888 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348408 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348673 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348873 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348947 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.452898 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.452976 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.453001 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.453090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.453114 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557279 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557347 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
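
The condition={...} payload in each setters.go:603 entry is a marshalled corev1.NodeCondition. Rebuilding it in Go spells out the fields more readably than the inline JSON (values copied from the 21:40:58 entries):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        t, _ := time.Parse(time.RFC3339, "2026-01-30T21:40:58Z")
        cond := corev1.NodeCondition{
            Type:               corev1.NodeReady,
            Status:             corev1.ConditionFalse,
            LastHeartbeatTime:  metav1.NewTime(t),
            LastTransitionTime: metav1.NewTime(t),
            Reason:             "KubeletNotReady",
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
                "Has your network provider started?",
        }
        out, _ := json.Marshal(cond)
        fmt.Println(string(out))
    }
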
Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660837 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660978 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660992 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.866972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867565 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
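
A note on reading these lines: each record carries a klog header — a severity letter plus MMDD (I0130 = Info on Jan 30, E = Error), wall-clock time, the emitting PID (4979, matching kubenswrapper[4979] in the journald prefix), then source file:line and the message. A small parser sketch (the regular expression is mine, not klog's):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches e.g.: E0130 21:40:56.795667 4979 configmap.go:193] Couldn't get ...
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./_-]+):(\d+)\] (.*)$`)

    func main() {
        line := `E0130 21:40:56.795667 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s\nmsg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }
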
Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971525 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971552 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971568 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.048553 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:40:57.075236711 +0000 UTC Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.069117 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.069241 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.069341 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:59 crc kubenswrapper[4979]: E0130 21:40:59.069547 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:59 crc kubenswrapper[4979]: E0130 21:40:59.069700 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:59 crc kubenswrapper[4979]: E0130 21:40:59.069831 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073907 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.177016 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.177986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.178107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.178286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.178378 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
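
The status updates that keep failing against the webhook (here and at 21:40:56 above) are strategic merge patches: the $setElementOrder/conditions directive pins the ordering of the conditions list, whose elements merge by their "type" key instead of being replaced wholesale. A trimmed sketch of applying such a patch with the apimachinery helper (payloads abbreviated; the real patch is the full JSON in the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
        original := []byte(`{"status":{"conditions":[
            {"type":"Ready","status":"False"}]}}`)
        patch := []byte(`{"status":{
            "$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"}],
            "conditions":[{"type":"Ready","status":"True"},
                          {"type":"Initialized","status":"True"}]}}`)

        // corev1.Pod supplies the struct tags (merge key "type" on
        // conditions) that drive the strategic merge.
        merged, err := strategicpatch.StrategicMergePatch(original, patch, &corev1.Pod{})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(merged))
    }
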
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281104 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281217 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384314 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384396 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487238 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.573012 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.588339 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.590811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.590942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.591001 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.591081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.591139 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.597577 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.622139 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.640834 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.659021 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.671312 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.688262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694363 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694462 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.705482 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.728908 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.751383 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.766103 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.779164 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.795004 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797505 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.811815 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.829438 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.851975 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.866966 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901975 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.902082 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.004939 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.004983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.004994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.005012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.005025 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.049183 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:01:53.444733683 +0000 UTC
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.068843 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:00 crc kubenswrapper[4979]: E0130 21:41:00.069010 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.109011 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
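Every status patch above fails for the same root cause: the network-node-identity webhook at https://127.0.0.1:9743 is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-01-30). The rejection happens in the client's standard x509 validity check, which the following minimal Go sketch reproduces offline; the certificate path is a hypothetical stand-in, not one taken from this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path to the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Same failure mode as the log: "current time ... is after ..."
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("within validity window")
	}
}
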
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211968 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.212002 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316205 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316244 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419251 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419283 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522959 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522987 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.627981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628066 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628079 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628112 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628131 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731769 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
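The Ready=False heartbeats above repeat every ~100 ms because the container runtime keeps reporting NetworkReady=false: ovnkube-controller is the component that would write the CNI config, and it is crash-looping on the expired webhook certificate. In effect, readiness here amounts to at least one usable CNI config file existing in /etc/kubernetes/cni/net.d/. The sketch below is a rough approximation of that test, not kubelet's actual code path (which asks the runtime over CRI).

package main

import (
	"fmt"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains any candidate CNI
// network config (*.conf, *.conflist, *.json) -- approximately what
// stands between this node and NetworkReady=true.
func hasCNIConfig(confDir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if m, _ := filepath.Glob(filepath.Join(confDir, pat)); len(m) > 0 {
			return true
		}
	}
	return false
}

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	if !hasCNIConfig(confDir) {
		// Mirrors the condition message recorded by setters.go above.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s. Has your network provider started?\n", confDir)
		return
	}
	fmt.Println("NetworkReady=true")
}
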
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836468 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.939997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940098 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940123 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940176 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044070 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044150 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044210 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.049484 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:42:57.003470457 +0000 UTC
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.069095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.069128 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.069248 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:01 crc kubenswrapper[4979]: E0130 21:41:01.069425 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:01 crc kubenswrapper[4979]: E0130 21:41:01.069501 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:01 crc kubenswrapper[4979]: E0130 21:41:01.069625 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148947 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.149006 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
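Note the certificate_manager lines: the kubelet-serving certificate itself is healthy (expires 2026-02-24), and the reported rotation deadline differs on every check because client-go's certificate manager draws a randomized deadline in the later part of the validity window so a fleet of nodes does not rotate in lockstep; here each drawn deadline already lies in the past, so rotation is due. A sketch of that kind of jitter follows, with the 70-90% span and the NotBefore date as assumptions.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// way through the certificate's validity window (assumed span; the
// real manager lives in k8s.io/client-go/util/certificate).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
}

func main() {
	// NotAfter comes from the log; NotBefore is an assumed issue date.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		// A different deadline on each draw, like the three log lines.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}
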
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252230 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252287 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252299 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252334 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354810 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354872 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354881 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457768 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457819 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560386 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560506 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663732 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663751 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663762 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767794 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.768092 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.870881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.870957 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.870970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.871006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.871020 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973840 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973938 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.049945 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:49:10.205645296 +0000 UTC Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.069618 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.069816 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076688 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076733 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181406 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181532 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284829 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284872 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.387983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388094 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388103 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485369 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485388 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.508112 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513456 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.532634 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538643 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
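Every retry above fails for the same reason: the serving certificate of the node-identity webhook expired on 2025-08-24, while the node's clock reads 2026-01-30. A minimal Go sketch of the same time-validity check that x509 verification applies, pointed at the 127.0.0.1:9743 endpoint quoted in the error; this is a diagnostic sketch under that assumption, not kubelet code:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the log's webhook URL; adjust for your environment.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // skip verification so an expired cert can still be inspected
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf, now := certs[0], time.Now()
	switch {
	case now.After(leaf.NotAfter):
		// Same condition the verifier reports as "certificate has expired".
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(leaf.NotBefore):
		fmt.Printf("not yet valid until %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	}
}

The sketch disables verification only to read the certificate; the kubelet's API client cannot do that, so every patch through the webhook fails until the certificate is rotated.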
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538785 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.560067 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564493 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
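The err string quotes the body the kubelet was trying to send: a strategic merge patch against the node's status subresource, where the $setElementOrder/conditions directive pins the ordering of the merged conditions list and each entry merges by its "type" key. A sketch of that JSON shape using only the standard library, with field values shortened from the log; it illustrates the wire format, not the kubelet's actual client-go code path:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// condition mirrors the fields visible in the patched conditions above.
type condition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	// Shape of the strategic merge patch quoted in the err= string.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			"conditions": []condition{{
				Type:               "Ready",
				Status:             "False",
				LastHeartbeatTime:  now,
				LastTransitionTime: now,
				Reason:             "KubeletNotReady",
				Message:            "container runtime network not ready",
			}},
		},
	}
	body, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // would be sent as a PATCH to the node's status subresource
}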
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564532 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564550 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.580812 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584932 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584966 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
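Independently of the webhook failure, the Ready condition itself is false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A small triage sketch that lists config candidates in that directory; the path comes from the log, while the extension set follows common libcni conventions and is an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		// .conf/.conflist/.json are the usual libcni config extensions (assumption).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config candidate:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Matches the log: until the network provider writes its config here,
		// the runtime reports NetworkReady=false and the node stays NotReady.
		fmt.Println("no CNI configuration files found")
	}
}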
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584978 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.585011 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.621237 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.621356 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.622976 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
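The sequence above is five "will retry" failures followed by "update node status exceeds retry count": a bounded retry with no backoff between attempts. A simplified sketch of that pattern; the constant value of five matches the attempts visible here and is assumed to mirror the kubelet's nodeStatusUpdateRetry, while the function shape is illustrative, not the real tryUpdateNodeStatus signature:

package main

import (
	"errors"
	"fmt"
)

// Assumed to mirror the kubelet's nodeStatusUpdateRetry constant; the log
// shows exactly five failed attempts before the final error.
const nodeStatusUpdateRetry = 5

// updateNodeStatus retries a status-update attempt a fixed number of times
// with no backoff, then gives up, like the sequence of E-lines above.
func updateNodeStatus(try func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := try(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the persistent failure seen in the log: the admission
	// webhook rejects every attempt while its certificate is expired.
	webhookDown := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	if err := updateNodeStatus(func() error { return webhookDown }); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}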
event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623045 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623058 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725920 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725931 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829282 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829393 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932353 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035157 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035203 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035249 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.051142 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:09:33.504697604 +0000 UTC Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.069232 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:03 crc kubenswrapper[4979]: E0130 21:41:03.069374 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.069232 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.069559 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:03 crc kubenswrapper[4979]: E0130 21:41:03.069583 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:03 crc kubenswrapper[4979]: E0130 21:41:03.069864 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140164 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140214 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140257 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243342 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243381 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243409 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346934 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346953 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449834 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449871 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449880 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449895 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449905 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552815 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552892 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655829 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655902 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759214 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759307 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759324 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861287 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861336 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861365 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.964996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965117 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965147 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.051350 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:09:16.228308717 +0000 UTC Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068374 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068437 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:04 crc kubenswrapper[4979]: E0130 21:41:04.068912 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172132 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172192 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172203 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172236 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275842 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275924 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275952 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275970 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379387 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379452 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483820 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483868 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587158 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587203 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691929 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795570 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.898959 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899197 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002080 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.052402 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:47:49.476597184 +0000 UTC Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.071231 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.071357 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:05 crc kubenswrapper[4979]: E0130 21:41:05.071520 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:05 crc kubenswrapper[4979]: E0130 21:41:05.071721 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.071833 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:05 crc kubenswrapper[4979]: E0130 21:41:05.071920 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.087165 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.099742 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105146 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105256 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105378 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105450 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.112237 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.123619 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.137417 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.152990 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.167222 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.182357 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.200635 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208801 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208815 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.212308 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.221644 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc 
kubenswrapper[4979]: I0130 21:41:05.236727 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.250299 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.265496 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.279985 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.298631 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312197 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312258 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.321001 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414937 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517957 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621426 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621436 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621468 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724805 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724822 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724849 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724867 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828438 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931715 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931817 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035297 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035404 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035459 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.053113 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:22:26.350813452 +0000 UTC Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.069740 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:06 crc kubenswrapper[4979]: E0130 21:41:06.069950 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139273 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.241962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242033 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242192 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345385 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345435 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448487 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654516 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654581 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654621 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758677 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862515 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862571 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862581 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862603 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862615 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965683 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965703 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965716 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.053672 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:52:15.992600455 +0000 UTC Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.068707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.068813 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:07 crc kubenswrapper[4979]: E0130 21:41:07.068896 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:07 crc kubenswrapper[4979]: E0130 21:41:07.069102 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.069183 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:07 crc kubenswrapper[4979]: E0130 21:41:07.069360 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071301 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.173969 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.174014 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.174052 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.174074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.174086 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.277572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.277616 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.277628 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.277646 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.277655 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.380242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.380303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.380315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.380332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.380343 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.483139 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.483182 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.483191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.483205 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.483217 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.587320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.587380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.587395 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.587418 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.587434 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.691071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.691124 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.691137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.691159 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.691173 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.794364 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.794414 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.794426 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.794484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.794495 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.897181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.897245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.897262 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.897283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.897296 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.000420 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.000489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.000501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.000529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.000544 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.054264 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:15:15.985600763 +0000 UTC Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.069708 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:08 crc kubenswrapper[4979]: E0130 21:41:08.069958 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.104252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.104309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.104322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.104340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.104355 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.206982 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.207078 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.207093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.207114 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.207127 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.311447 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.311491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.311503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.311523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.311537 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.416002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.416093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.416107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.416130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.416208 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.519108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.519179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.519194 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.519217 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.519231 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.621915 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.621974 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.621984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.622004 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.622017 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.726223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.726290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.726302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.726325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.726340 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.829449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.829490 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.829498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.829517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.829562 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.933605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.933687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.933710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.933762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.933788 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:08Z","lastTransitionTime":"2026-01-30T21:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.037835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.037892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.037908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.037933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.037950 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.055497 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:41:40.129419538 +0000 UTC Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.069182 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.069365 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.069605 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.069643 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.069711 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.069894 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.070892 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.071370 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.141064 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.141115 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.141125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.141143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.141154 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.245225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.245779 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.245792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.245818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.245831 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.348883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.348931 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.348943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.348964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.348974 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.451590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.451623 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.451634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.451650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.451661 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.554643 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.554710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.554724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.554749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.554764 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.658429 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.658486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.658500 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.658525 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.658538 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.761737 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.761797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.761807 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.761826 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.761837 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.864564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.864605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.864616 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.864636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.864648 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.968021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.968110 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.968121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.968140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.968155 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:09Z","lastTransitionTime":"2026-01-30T21:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.055690 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:51:42.34484665 +0000 UTC Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.069298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:10 crc kubenswrapper[4979]: E0130 21:41:10.069462 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.071048 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.071072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.071083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.071096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.071108 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.173200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.173248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.173258 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.173277 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.173287 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.277582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.277632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.277645 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.277665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.277678 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.380980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.381027 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.381056 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.381080 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.381091 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.483722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.483763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.483775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.483790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.483807 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586219 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690736 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690826 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794789 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.897922 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.897988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.898002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.898025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.898086 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001806 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001904 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001916 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.056496 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:46:54.355134177 +0000 UTC Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.069279 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:11 crc kubenswrapper[4979]: E0130 21:41:11.069453 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.069604 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:11 crc kubenswrapper[4979]: E0130 21:41:11.069930 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.070028 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:11 crc kubenswrapper[4979]: E0130 21:41:11.070338 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105390 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105413 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.208921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.208973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.208984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.209007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.209019 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312246 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312336 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414702 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414717 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517324 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517388 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517408 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517423 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.619987 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620060 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620104 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722554 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722648 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825815 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825827 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928176 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928253 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032463 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032489 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.057361 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:47:39.279275965 +0000 UTC Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.068692 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.068837 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135480 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238231 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238244 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
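
Note the certificate_manager lines interleaved here: the kubelet-serving certificate itself is still valid (it expires 2026-02-24), but the rotation deadline is recomputed on every pass and lands on a different date each time (2025-12-26, then 2025-11-27, now 2025-12-20), all of which are in the past relative to this node's clock of 2026-01-30. Upstream client-go picks that deadline at a random point late in the certificate's lifetime (roughly 70-90% of the total duration, going by the upstream comment; treat the exact fraction as an assumption), which explains both why it moves between entries and why rotation is permanently "due" here:

    import random
    from datetime import datetime

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        """Sketch of client-go's jittered deadline: a uniform point at 70-90%
        of the certificate lifetime (the fraction is an assumption here)."""
        total = not_after - not_before
        return not_before + total * random.uniform(0.7, 0.9)

    # Every deadline the log prints is earlier than the log's own clock,
    # so each recomputation immediately triggers another rotation attempt.
    now = datetime.fromisoformat("2026-01-30T21:41:12")
    print(now > datetime.fromisoformat("2025-12-26T08:51:42"))  # True -> overdue
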
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341419 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444446 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546565 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546584 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546595 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649256 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649313 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649347 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666132 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666221 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
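
The multi-kilobyte error entries that follow show why none of these status updates land: every PATCH of node "crc" is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node's clock reads 2026-01-30T21:41:12Z. The x509 arithmetic checks out directly from the dates in the message:

    from datetime import datetime, timezone

    expired   = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
    log_clock = datetime(2026, 1, 30, 21, 41, 12, tzinfo=timezone.utc)

    # The webhook certificate lapsed about 159 days before this entry, so
    # every node-status patch fails with the same TLS verification error.
    print((log_clock - expired).days)  # 159

This is a second, independent fault from the missing CNI config: even if pod networking recovered, status patches would still bounce off the expired webhook certificate until it is reissued. On CRC this pattern typically means the VM was started long after its bundled certificates lapsed, though that inference is not in the log itself.
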
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.684590 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.685287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " 
pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.685420 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.685467 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:44.685453252 +0000 UTC m=+100.646700285 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688728 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688740 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.703146 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
[...]}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707287 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.721102 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725799 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725882 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.743769 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747712 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747758 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747785 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.761993 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.762171 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764103 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764113 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764152 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867718 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969932 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
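
Note: the Ready=False heartbeats above all hinge on a single gate: kubelet keeps reporting NetworkReady=false until a network configuration file appears in /etc/kubernetes/cni/net.d/. A rough Go re-creation of that directory probe follows; the directory path is the one named in the log, while the extension filter (.conf, .conflist, .json) is an assumption based on common CNI config loaders, not code lifted from kubelet.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
        var found []string
        // Assumed filter: common CNI config loaders accept these extensions.
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, _ := filepath.Glob(filepath.Join(confDir, pat))
            found = append(found, matches...)
        }
        if len(found) == 0 {
            fmt.Fprintf(os.Stderr, "no CNI configuration file in %s. Has your network provider started?\n", confDir)
            os.Exit(1)
        }
        fmt.Println("network config present:", found)
    }

Once the network operator writes its conflist into that directory, the same probe succeeds and the node's Ready condition flips on the next sync loop.
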
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.058210 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 14:07:30.108631439 +0000 UTC
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.069781 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.069878 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:13 crc kubenswrapper[4979]: E0130 21:41:13.069983 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.069796 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:13 crc kubenswrapper[4979]: E0130 21:41:13.070298 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:13 crc kubenswrapper[4979]: E0130 21:41:13.070178 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072282 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072300 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072315 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
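
Note: the certificate_manager lines in this window report the same kubelet-serving expiry (2026-02-24 05:53:03 UTC) but different rotation deadlines (2026-01-14 here, 2025-12-14 one second later, below). That is consistent with client-go's certificate manager re-drawing the deadline on each evaluation as a random point between 70% and 90% of the way through the certificate's validity window; both draws already lie in the past relative to the node clock, so rotation is due immediately. A sketch of that computation, with an assumed NotBefore since the log shows only the expiry:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mimics client-go's certificate manager: pick a random
    // instant 70-90% of the way through the certificate's validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year validity
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

The jitter explains why two heartbeats a second apart can print deadlines a month apart while referring to the same certificate.
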
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175523 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278693 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278776 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278817 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382443 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382478 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485423 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485432 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485447 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485457 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588784 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588844 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588873 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.691969 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692050 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692087 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795346 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795436 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897740 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897778 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000949 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.059422 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:53:09.857021026 +0000 UTC
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.068731 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:14 crc kubenswrapper[4979]: E0130 21:41:14.068941 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103905 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103927 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103938 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206696 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206715 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310156 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310194 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310209 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412968 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.413022 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516199 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516297 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.535828 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/0.log"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.536277 4979 generic.go:334] "Generic (PLEG): container finished" podID="6722e8df-a635-4808-b6b9-d5633fc3d34b" containerID="553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7" exitCode=1
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.536326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerDied","Data":"553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"}
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.537109 4979 scope.go:117] "RemoveContainer" containerID="553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"
Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.552669 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.569361 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.584421 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.602701 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.618248 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.619958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620055 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620100 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.633151 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.649587 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.664253 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.680632 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.695015 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.714330 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722888 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722960 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722994 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.727378 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.740249 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc 
kubenswrapper[4979]: I0130 21:41:14.759700 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.776269 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.791616 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.805788 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826241 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826285 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826295 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826314 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929736 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929814 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929826 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.032965 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033312 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033342 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033366 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033395 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.059968 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:46:31.754328736 +0000 UTC Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.069091 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.069140 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.069149 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:15 crc kubenswrapper[4979]: E0130 21:41:15.069251 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:15 crc kubenswrapper[4979]: E0130 21:41:15.069466 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:15 crc kubenswrapper[4979]: E0130 21:41:15.069493 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.085090 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.099637 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.114180 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.127976 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.139918 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141681 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141714 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.152362 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.163967 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.175171 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.193610 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.209217 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.221442 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.236259 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244603 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244645 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.255008 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.271768 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.294197 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.308631 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.322145 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348423 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348524 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348540 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451334 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451368 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.542720 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/0.log" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.542779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554020 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554418 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554515 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554736 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.561981 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.579715 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5a
b7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.595854 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.610174 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.623534 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.633815 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.646730 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658151 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658226 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658243 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658291 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.659805 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.677021 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.693692 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.706640 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.720677 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 
21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.733537 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.747806 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760781 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760792 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.763141 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.778735 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.798876 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863654 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863695 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863735 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
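The "back-off 20s restarting failed container=ovnkube-controller" state above, at restartCount 2, matches the kubelet's standard crash-loop policy: the restart delay starts at 10s, doubles on each subsequent crash, and is capped at five minutes. A sketch of that schedule; the constants are the commonly documented kubelet defaults, assumed here rather than read from this node's configuration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial = 10 * time.Second // first back-off after a crash
            ceiling = 5 * time.Minute  // the kubelet caps the delay here
        )
        delay := initial
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: back-off %s\n", restart, delay)
            delay *= 2
            if delay > ceiling {
                delay = ceiling
            }
        }
    }

Running this prints 20s for restart 2, which is exactly the back-off the log shows for ovnkube-controller's second crash.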
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966569 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966623 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966640 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.060308 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:21:53.672758142 +0000 UTC Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069071 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069507 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069532 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069544 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: E0130 21:41:16.069754 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
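The certificate_manager lines interleaved here show the kubelet-serving certificate expiring 2026-02-24 05:53:03 UTC, but with rotation deadlines that differ from line to line (2025-11-07 above, 2025-12-23 a little further down). That is expected: client-go picks a jittered rotation deadline roughly 70-90% of the way through the certificate's validity window, recomputed on each check. A sketch of that computation; the issue time is an assumption (a one-year certificate, which would put the two logged deadlines at about 70% and 83% of the window):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline returns a random point 70-90% of the way through the
    // validity window, mirroring client-go's certificate-manager jitter.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed: one-year certificate
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

The jitter spreads rotation load across a fleet, which is why repeated log lines for the same certificate can quote different deadlines.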
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172956 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275531 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275576 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275587 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378414 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378431 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481749 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584176 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.686929 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.686973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.686983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.687002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.687014 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.789984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790051 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790082 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790092 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892302 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995514 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995528 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.061857 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:30:50.199419506 +0000 UTC Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.069281 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.069342 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.069374 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:17 crc kubenswrapper[4979]: E0130 21:41:17.069439 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:17 crc kubenswrapper[4979]: E0130 21:41:17.069476 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:17 crc kubenswrapper[4979]: E0130 21:41:17.069541 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
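By this point the log is dominated by two repeating signatures: the webhook TLS failure on every status patch, and the NetworkReady=false node heartbeat roughly every 100ms. An illustrative triage helper, not part of any OpenShift tooling, that tallies those signatures from a log fed on stdin and makes the dominant failure obvious at a glance:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        sigs := []string{
            "failed calling webhook",
            "certificate has expired",
            "no CNI configuration file",
            "Node became not ready",
        }
        counts := make(map[string]int)
        sc := bufio.NewScanner(os.Stdin)
        // Status-patch lines in this log run to several kilobytes, so raise
        // the scanner's limit well past the 64 KiB default.
        sc.Buffer(make([]byte, 1024*1024), 16*1024*1024)
        for sc.Scan() {
            line := sc.Text()
            for _, s := range sigs {
                counts[s] += strings.Count(line, s)
            }
        }
        for _, s := range sigs {
            fmt.Printf("%-28s %d\n", s, counts[s])
        }
    }

Run it as "go run triage.go < kubelet.log"; on this capture the webhook and CNI counts climb together, consistent with the expired network-node-identity certificate being the single root cause.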
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098067 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098113 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098149 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098161 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200913 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200946 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[21:41:17.303-21:41:18.024: eight identical node-status cycles omitted (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready" with the same KubeletNotReady condition), logged at roughly 100 ms intervals]
Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.062007 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:57:47.371752162 +0000 UTC
Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.068800 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:18 crc kubenswrapper[4979]: E0130 21:41:18.069144 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
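Note that every certificate_manager.go entry reports the same expiration (2026-02-24 05:53:03) but a different rotation deadline, one per second. That is consistent with a client-go-style certificate manager, which draws a fresh randomized deadline inside the certificate's lifetime on each pass. A sketch of that draw, assuming the commonly cited 70-90% jitter band (the exact bounds are an assumption, not something this log states):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the
// certificate's lifetime. The bounds mirror the jitter scheme client-go's
// certificate manager is generally described as using; treat them as an
// assumption for illustration.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log; the issuance time is assumed.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-90 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}

Most of the drawn deadlines land before the node's clock (2026-01-30), so each pass treats rotation as already due and the line reappears on the next sync with a new draw.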
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.085827 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127559 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127606 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127648 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231078 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231150 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231173 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231190 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334364 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334423 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334434 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334466 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438285 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438332 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544588 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544913 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648088 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648168 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648207 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751135 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751240 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751253 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854178 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854208 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956834 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956891 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956917 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956927 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.059953 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.059994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.060002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.060022 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.060049 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.063187 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:46:36.203250222 +0000 UTC Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.069518 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.069598 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.069529 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:19 crc kubenswrapper[4979]: E0130 21:41:19.069650 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:19 crc kubenswrapper[4979]: E0130 21:41:19.069722 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:19 crc kubenswrapper[4979]: E0130 21:41:19.069802 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163459 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163562 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266400 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266412 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266428 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266443 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369690 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472390 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472461 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472506 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472525 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574956 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.575007 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.677981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678172 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678198 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780795 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780865 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780933 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884477 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884525 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.988927 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989114 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989168 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.063411 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:43.9219475 +0000 UTC Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.068959 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:20 crc kubenswrapper[4979]: E0130 21:41:20.069206 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
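When triaging a capture like this, the useful signal is which pods are stuck behind the network, not the individual entries. A small illustrative tally of the "Error syncing pod, skipping" lines; the file name kubelet.log is an assumed local copy of this log, and the regular expression is just one way to slice it:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// kubelet.log: assumed local copy of the capture above.
	f, err := os.Open("kubelet.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Capture the pod="namespace/name" field of each sync failure.
	re := regexp.MustCompile(`Error syncing pod, skipping".*pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	// Some kubelet entries (full status patches) run to many kilobytes,
	// so raise the scanner's per-line limit well above the 64 KiB default.
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}

In this excerpt the tally comes to four pods, all network-dependent (the two network-check pods, the console plugin, and the multus metrics daemon), each failing once per resync; nothing else is erroring.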
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092286 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194939 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194954 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297936 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297961 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401300 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401312 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401333 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401349 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504848 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504905 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504952 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608468 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608482 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711708 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711880 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.814972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815056 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815109 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919299 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919370 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919453 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021643 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.064383 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:54:28.438074626 +0000 UTC Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.068730 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.068767 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.068837 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:21 crc kubenswrapper[4979]: E0130 21:41:21.068935 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:21 crc kubenswrapper[4979]: E0130 21:41:21.069085 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:21 crc kubenswrapper[4979]: E0130 21:41:21.069512 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.069834 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124952 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.228920 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.228994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.229012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.229069 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.229117 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332909 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332987 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.333004 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435539 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435588 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435638 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538801 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538847 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.566514 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.570415 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.571401 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.595412 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.608873 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.632271 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641091 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641161 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.652791 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.682788 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.705332 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.726211 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744757 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744780 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.762103 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc 
kubenswrapper[4979]: I0130 21:41:21.778292 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.794624 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.810935 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.827747 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.842102 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850330 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850343 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850382 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.866869 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.884090 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.901157 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.914524 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953849 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953861 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953887 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056507 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.065055 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:56:32.466463254 +0000 UTC Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.069414 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:22 crc kubenswrapper[4979]: E0130 21:41:22.069660 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159533 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159552 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159564 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263066 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263148 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263162 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366557 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366657 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366704 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469391 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469443 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469487 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.571992 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572087 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572122 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.576082 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.576683 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.580184 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" exitCode=1 Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.580259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.580354 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.581197 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:41:22 crc kubenswrapper[4979]: E0130 21:41:22.581390 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.599122 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.612110 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.626948 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.641917 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.660794 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675297 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675329 4979 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675359 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675372 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.690625 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.706077 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.722013 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.738524 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.752694 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.766949 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781730 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781793 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.788330 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.801484 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.813356 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.823704 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.839662 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.855957 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.867497 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885964 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988562 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988658 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.065463 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:09:22.620632317 +0000 UTC Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.068933 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.069006 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.069142 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.068947 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.069315 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.069378 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091247 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091313 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122065 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122119 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122129 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122159 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.142433 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145794 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145806 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.163361 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168228 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.184935 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189728 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189968 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.190012 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.204764 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208312 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.221224 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.221408 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223113 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223150 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223195 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325594 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325609 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325620 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428531 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428561 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531837 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531997 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.586416 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.592570 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"
Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.592902 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.610395 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.631659 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.636629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637213 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637247 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637281 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637306 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.664131 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.679592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 
21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.695747 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.713280 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.731834 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740138 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740199 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740232 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.751579 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.768423 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.783798 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.800153 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.817462 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.835093 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842825 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842889 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.851519 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.868113 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.879995 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.894281 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.909371 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.945971 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.946025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.948213 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.948345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.948374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052482 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052556 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052605 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.066201 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:09:49.838978519 +0000 UTC Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.069651 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:24 crc kubenswrapper[4979]: E0130 21:41:24.069896 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155871 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.156002 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.260995 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261136 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.363952 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364120 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467164 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467190 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570641 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673499 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673549 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777592 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777605 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879761 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.983961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984055 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984091 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.067138 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:14:10.597160908 +0000 UTC Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.069604 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.069705 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.069767 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:25 crc kubenswrapper[4979]: E0130 21:41:25.069936 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:25 crc kubenswrapper[4979]: E0130 21:41:25.071357 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:25 crc kubenswrapper[4979]: E0130 21:41:25.071488 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087775 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.090406 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.114281 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5a
b7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.134822 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.153938 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.172625 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.187247 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190270 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190314 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190326 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190357 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.202411 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.216175 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.233310 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.249849 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.261655 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.275438 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.292432 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293416 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293498 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.305687 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.318984 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.347988 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.362224 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.375998 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397339 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397379 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500365 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603960 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.604012 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706435 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706556 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809466 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912635 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912666 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016219 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016295 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016306 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016337 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.068110 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:41:23.597023925 +0000 UTC Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.069394 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:26 crc kubenswrapper[4979]: E0130 21:41:26.069579 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120757 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120894 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120918 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223878 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223953 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223986 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326505 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326539 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326554 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.429919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430022 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430089 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430110 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532888 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532949 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635395 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635493 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635525 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635544 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739669 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739719 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842963 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.843077 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.945946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946074 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.048980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049054 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049086 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069118 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:21:13.3025775 +0000 UTC
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069449 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069522 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069603 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:27 crc kubenswrapper[4979]: E0130 21:41:27.069750 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:27 crc kubenswrapper[4979]: E0130 21:41:27.069849 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:27 crc kubenswrapper[4979]: E0130 21:41:27.070050 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152226 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152240 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254670 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254763 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358219 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358271 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.460961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461053 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461120 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565133 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565249 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565269 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668613 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772726 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772739 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
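Each setters.go:603 entry above serializes the node's Ready condition as it is pushed to the API server. A minimal client-go sketch that reads the same condition back from the cluster; the kubeconfig path is the client-go default and the node name "crc" is taken from this log, with error handling kept deliberately terse:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// status.conditions holds the same Ready condition the kubelet's
	// setters serialize in the log entries above.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s lastTransition=%s\n  message=%q\n",
				c.Status, c.Reason, c.LastTransitionTime, c.Message)
		}
	}
}

While the CNI config is missing, this prints status False with reason KubeletNotReady, matching the condition JSON in the log.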
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875620 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875681 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875742 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978671 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978850 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.068648 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.068814 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.069700 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:18:22.095360666 +0000 UTC
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082310 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082333 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082359 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082380 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184922 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184941 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184962 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287919 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390782 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390816 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493781 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493868 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493889 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493902 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597347 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597358 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700889 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803347 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803406 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.875386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875498 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875583 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.875557979 +0000 UTC m=+148.836805012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.875500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875616 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875746 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.875726134 +0000 UTC m=+148.836973187 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906693 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906707 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906734 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.976155 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976347 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.976315485 +0000 UTC m=+148.937562538 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.976397 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.976456 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976557 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976573 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976584 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976614 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976625 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.976617853 +0000 UTC m=+148.937864886 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976635 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976662 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976707 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.976692595 +0000 UTC m=+148.937939638 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009810 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009867 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009935 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069477 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069595 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069656 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
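The nestedpendingoperations.go:348 failures above all schedule their next retry 1m4s out (m=+148.93... targeting m=+212.93...): the kubelet's volume manager backs off exponentially per volume between failed mount/unmount attempts, and 64s is what doubling from a 500ms initial delay reaches after seven consecutive failures. A sketch of that capped doubling; the initial delay and cap are assumptions for illustration, not values read from this kubelet's configuration:

package main

import (
	"fmt"
	"time"
)

// retryDelay models a capped exponential backoff in the style of the
// volume manager's per-volume "durationBeforeRetry": start small, double
// on each consecutive failure, and stop growing at a cap.
func retryDelay(failures int) time.Duration {
	delay := 500 * time.Millisecond // assumed initial delay
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
	for i := 0; i < failures; i++ {
		delay *= 2
		if delay >= maxDelay {
			return maxDelay
		}
	}
	return delay
}

func main() {
	for n := 1; n <= 8; n++ {
		fmt.Printf("failure %d -> next retry in %s\n", n, retryDelay(n))
	}
	// failure 7 prints "1m4s", matching the delay logged above.
}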
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:29 crc kubenswrapper[4979]: E0130 21:41:29.069813 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069871 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:31:24.77673221 +0000 UTC Jan 30 21:41:29 crc kubenswrapper[4979]: E0130 21:41:29.069971 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:29 crc kubenswrapper[4979]: E0130 21:41:29.070102 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113534 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113546 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113604 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.216923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217103 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217160 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321565 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321612 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424443 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424489 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527085 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527162 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527197 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.069499 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:30 crc kubenswrapper[4979]: E0130 21:41:30.069692 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.070438 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:04:17.804079997 +0000 UTC
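The stretch above is one condition repeating roughly every 100ms: the container runtime reports NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/, so the kubelet keeps marking the node NotReady and refuses to build pod sandboxes. As a hedged sketch of the kind of file the runtime scans that directory for (every value below is an illustrative assumption, not taken from this cluster, where the real config is produced by the cluster network operator rather than by hand):

// Sketch only: print a minimal CNI .conflist of the shape the runtime
// expects to find in /etc/kubernetes/cni/net.d/. The plugin type, network
// name, and subnet are assumptions for illustration; OpenShift/CRC actually
// deploys ovn-kubernetes via multus.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "example-net", // assumed network name
		"plugins": []map[string]any{{
			"type": "bridge", // assumed plugin; not what this cluster really uses
			"ipam": map[string]any{"type": "host-local", "subnet": "10.88.0.0/16"},
		}},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// A file of this shape (e.g. 10-example.conflist, name assumed) appearing
	// in /etc/kubernetes/cni/net.d/ is what clears NetworkPluginNotReady.
	fmt.Println(string(out))
}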
Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.068848 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:31 crc kubenswrapper[4979]: E0130 21:41:31.069079 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.069283 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:31 crc kubenswrapper[4979]: E0130 21:41:31.069411 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.069508 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:31 crc kubenswrapper[4979]: E0130 21:41:31.069691 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.070883 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:38:26.446623235 +0000 UTC
Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.069233 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:32 crc kubenswrapper[4979]: E0130 21:41:32.069815 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.071203 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:41:59.811260264 +0000 UTC
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.069026 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.069152 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.069074 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.069328 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.069440 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.069540 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.072003 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:32:53.950043257 +0000 UTC
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450370 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450398 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450416 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.475732 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483066 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483180 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483218 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.498117 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.503679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.503925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.504013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.504177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.504278 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.520382 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525600 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525711 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525728 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.546658 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551819 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551873 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.571621 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.571757 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574737 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574892 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678566 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678710 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782874 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886349 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886399 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988562 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988608 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.069276 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:34 crc kubenswrapper[4979]: E0130 21:41:34.069442 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.073419 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:18.157942087 +0000 UTC Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.091653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.091813 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.091910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.092017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.092164 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195918 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298898 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402782 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402839 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506707 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506727 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506761 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506779 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610678 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610738 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610805 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.714523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.714951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.715110 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.715239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.715355 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818272 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921482 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921559 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921614 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025817 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025928 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025947 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.068876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.068876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.069108 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:35 crc kubenswrapper[4979]: E0130 21:41:35.069262 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:35 crc kubenswrapper[4979]: E0130 21:41:35.069546 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:35 crc kubenswrapper[4979]: E0130 21:41:35.069782 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.073760 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:43:01.876710483 +0000 UTC Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.085988 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.107132 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129600 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129643 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129659 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.140699 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.158561 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 
21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.172937 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.192685 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.217113 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233550 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.234375 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.256488 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.276156 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.292466 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.308296 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.320138 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.336752 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc 
kubenswrapper[4979]: I0130 21:41:35.337808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337949 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.350390 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.365327 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.384637 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.402481 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440895 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440952 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440984 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544101 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649212 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649254 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752802 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856458 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856566 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856589 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960539 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960586 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960604 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064694 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064712 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.068928 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:36 crc kubenswrapper[4979]: E0130 21:41:36.069229 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.074083 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:40:44.591997494 +0000 UTC
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167761 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167779 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271378 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271446 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374084 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374120 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477693 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580896 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684246 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684312 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787619 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890656 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890674 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994294 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994340 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.069815 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.069854 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.069977 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.069999 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.070165 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.070360 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.071060 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"
Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.071216 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.074384 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:19:02.473502963 +0000 UTC
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100878 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100918 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204860 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204911 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307388 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307477 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307521 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411545 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411571 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515532 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619078 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619159 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619204 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722638 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722718 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722804 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826545 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826568 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826622 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929356 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929381 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929441 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032820 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032927 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.069792 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:38 crc kubenswrapper[4979]: E0130 21:41:38.069982 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.074942 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:07:19.724773686 +0000 UTC
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137810 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.240874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.240938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.240951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.241020 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.241189 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.343990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344106 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344132 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344187 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447959 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.448018 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551854 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654545 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758871 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863499 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863512 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863532 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863545 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966735 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.068805 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.068815 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.069015 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:39 crc kubenswrapper[4979]: E0130 21:41:39.069179 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:39 crc kubenswrapper[4979]: E0130 21:41:39.069269 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:39 crc kubenswrapper[4979]: E0130 21:41:39.069371 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.071000 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.075903 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:15:36.806236322 +0000 UTC
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.174961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.174998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.175010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.175045 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.175060 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278969 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.279004 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383148 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383202 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383249 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487158 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487255 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487368 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590447 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590496 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693234 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693301 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797261 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797272 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902726 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902751 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902819 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007798 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007874 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.068906 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:40 crc kubenswrapper[4979]: E0130 21:41:40.069179 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.076582 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:00:39.642528673 +0000 UTC
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111670 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111758 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111873 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216555 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216583 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216601 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320146 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320172 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320233 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423865 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423891 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423924 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423947 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527789 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527867 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527915 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632543 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632604 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632642 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632657 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735844 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735939 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735954 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839799 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.943334 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.943731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.943900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.944096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.944258 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.048698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049175 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049342 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049634 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.069591 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:41 crc kubenswrapper[4979]: E0130 21:41:41.069932 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:41 crc kubenswrapper[4979]: E0130 21:41:41.070099 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:41 crc kubenswrapper[4979]: E0130 21:41:41.070298 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.077242 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:19:57.965638921 +0000 UTC
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152616 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256398 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256488 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359606 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359627 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463515 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463633 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567663 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.568207 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672160 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672230 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672274 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672299 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775798 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775872 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775894 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775911 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880187 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880341 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984139 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984183 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984248 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.068746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:42 crc kubenswrapper[4979]: E0130 21:41:42.068996 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.078123 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:34:59.454444404 +0000 UTC Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087897 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190506 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190565 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190576 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190596 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190608 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294118 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294178 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294190 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396401 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396458 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396472 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396490 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396501 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500404 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500429 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500461 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500486 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603641 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603718 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603735 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603784 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706558 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706576 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706593 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706604 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809916 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913771 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913799 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913818 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019382 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019527 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019552 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.069417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.069578 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.069670 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.069743 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.069734 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.069913 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.078978 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:15:24.359424155 +0000 UTC Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.086594 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123438 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123450 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225776 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328740 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328815 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328825 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432604 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432752 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535468 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535628 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535711 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639334 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639382 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639398 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.743905 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745385 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745414 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745434 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848685 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848824 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906511 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906612 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906630 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.928767 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934182 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.955774 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961786 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961872 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961901 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961924 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.983881 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989299 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.011619 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017397 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017429 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017455 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.040772 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.040997 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043217 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043275 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.069106 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.069465 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.079227 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:36:01.442544161 +0000 UTC Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146210 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146291 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146385 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250444 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250861 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354126 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354192 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354210 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354262 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457739 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561379 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561525 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664120 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664275 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.769946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770087 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770118 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.773690 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.774219 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.774820 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:48.774757611 +0000 UTC m=+164.736004684 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873867 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873896 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873914 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977428 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977455 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977473 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.069746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.069746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.069889 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:45 crc kubenswrapper[4979]: E0130 21:41:45.070158 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:45 crc kubenswrapper[4979]: E0130 21:41:45.070739 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:45 crc kubenswrapper[4979]: E0130 21:41:45.071132 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079468 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:02:14.323600767 +0000 UTC
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079779 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.092158 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.111743 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.143340 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.162383 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.183945 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186730 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186897 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.226156 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-75j89" podStartSLOduration=80.226137419 podStartE2EDuration="1m20.226137419s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.225651765 +0000 UTC m=+101.186898818" watchObservedRunningTime="2026-01-30 21:41:45.226137419 +0000 UTC m=+101.187384452"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.246500 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xh5mg" podStartSLOduration=80.246472958 podStartE2EDuration="1m20.246472958s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.246334185 +0000 UTC m=+101.207581248" watchObservedRunningTime="2026-01-30 21:41:45.246472958 +0000 UTC m=+101.207719991"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327289 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.392376 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" podStartSLOduration=80.392351958 podStartE2EDuration="1m20.392351958s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.376391108 +0000 UTC m=+101.337638141" watchObservedRunningTime="2026-01-30 21:41:45.392351958 +0000 UTC m=+101.353598991"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.414190 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.414149199 podStartE2EDuration="27.414149199s" podCreationTimestamp="2026-01-30 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.413601813 +0000 UTC m=+101.374848846" watchObservedRunningTime="2026-01-30 21:41:45.414149199 +0000 UTC m=+101.375396252"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430019 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430120 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430134 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.489953 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.489929247 podStartE2EDuration="46.489929247s" podCreationTimestamp="2026-01-30 21:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.489284588 +0000 UTC m=+101.450531621" watchObservedRunningTime="2026-01-30 21:41:45.489929247 +0000 UTC m=+101.451176280"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.517608 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.5175817289999998 podStartE2EDuration="2.517581729s" podCreationTimestamp="2026-01-30 21:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.511859961 +0000 UTC m=+101.473107004" watchObservedRunningTime="2026-01-30 21:41:45.517581729 +0000 UTC m=+101.478828762"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532638 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532678 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532692 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.536523 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.53650411 podStartE2EDuration="1m20.53650411s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.536201552 +0000 UTC m=+101.497448585" watchObservedRunningTime="2026-01-30 21:41:45.53650411 +0000 UTC m=+101.497751143"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635524 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635583 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635639 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738333 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738377 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842266 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842408 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842431 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946371 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946400 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049507 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049720 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049769 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.072266 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:46 crc kubenswrapper[4979]: E0130 21:41:46.073536 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.079835 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:41:02.303666833 +0000 UTC
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153382 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153411 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153430 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257343 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257518 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359995 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463793 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463826 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463847 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567612 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567634 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.670993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671243 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.773948 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.773984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.773997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.774015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.774026 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876895 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979886 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.069536 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.069560 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.069785 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:47 crc kubenswrapper[4979]: E0130 21:41:47.069904 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:47 crc kubenswrapper[4979]: E0130 21:41:47.070009 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:47 crc kubenswrapper[4979]: E0130 21:41:47.070178 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.080022 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:21:12.610387454 +0000 UTC
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081401 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081455 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081497 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184092 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184171 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287109 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287135 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395279 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499378 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499431 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602886 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602928 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602955 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602967 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705267 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705363 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.808989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809070 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809103 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912357 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.069501 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:48 crc kubenswrapper[4979]: E0130 21:41:48.069779 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
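The entries above show the kubelet relaying the runtime's NetworkReady=false status: pod sandboxes (here network-metrics-daemon-pk47q) cannot be created until some network provider writes a CNI configuration into /etc/kubernetes/cni/net.d/. Purely as an illustration of the kind of file the runtime scans that directory for (on this cluster the real file is written by OVN-Kubernetes, whose ovnkube-node pod appears further down, and its contents differ), a minimal bridge-plugin conflist looks like:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }

Once any valid *.conf or *.conflist file is present there, the runtime reports NetworkReady=true and the KubeletNotReady condition logged above clears.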
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.080620 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:38:11.027743363 +0000 UTC Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118310 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118336 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118345 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221118 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221153 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323395 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323462 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426685 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426783 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426795 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529783 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529807 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529858 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633142 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633266 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633411 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.736903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737085 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737150 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.839973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840050 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840068 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840079 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943540 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943584 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943593 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943625 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.046999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047119 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047144 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047163 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.069523 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.069521 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:49 crc kubenswrapper[4979]: E0130 21:41:49.069825 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.069557 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:49 crc kubenswrapper[4979]: E0130 21:41:49.069938 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:49 crc kubenswrapper[4979]: E0130 21:41:49.070154 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.080984 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:21:30.442129193 +0000 UTC Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149873 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149975 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253620 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253722 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.356988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357126 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459686 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459696 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563635 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563686 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666500 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666541 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769455 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769561 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769644 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769666 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872362 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975278 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975470 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.069703 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:50 crc kubenswrapper[4979]: E0130 21:41:50.069941 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
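Note the certificate_manager.go:356 lines interleaved above: the expiration stays fixed at 2026-02-24 05:53:03 UTC, yet each pass prints a different rotation deadline (2025-12-08 07:38:11, then 2025-12-01 00:21:30 so far, with fresh values on later passes below). That is expected behavior, not clock drift: client-go's certificate manager re-derives the deadline on every evaluation with random jitter at roughly 70-90% of the certificate's lifetime, so that nodes do not all rotate at once. A minimal Go sketch of that scheme (modeled on client-go's jittered deadline, not the actual kubelet code; the issue time below is assumed, since the log only shows the expiry):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a point at ~70-90% of the certificate's
    // lifetime; fresh jitter per call is why each pass in the log above
    // prints a different deadline for the same expiry.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.Add(-90 * 24 * time.Hour) // assumed issue time
        for i := 0; i < 3; i++ {
            fmt.Println(rotationDeadline(notBefore, notAfter))
        }
    }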
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.071268 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:41:50 crc kubenswrapper[4979]: E0130 21:41:50.071669 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079596 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079663 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079688 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079741 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.081858 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:49:31.922886446 +0000 UTC Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182635 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182684 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182732 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286358 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286545 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286592 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390202 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390268 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390337 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493162 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493173 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493201 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596886 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596898 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596931 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699760 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699924 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803518 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803630 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803660 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907592 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907616 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907632 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010720 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010797 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.068870 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.068970 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:51 crc kubenswrapper[4979]: E0130 21:41:51.069062 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
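The scope.go "RemoveContainer" / CrashLoopBackOff entries above (container ovnkube-controller in pod ovnkube-node-jttsv, back-off 40s) are the likely cause of the missing CNI configuration: on an OVN-Kubernetes cluster it is the ovnkube-node pod that provides the file the runtime is waiting for, so the node stays NotReady for as long as its controller container crash-loops. Pulling the previous container instance's logs is the natural first step, using only standard flags and the names taken from the log above:

    oc -n openshift-ovn-kubernetes logs ovnkube-node-jttsv -c ovnkube-controller --previous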
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:51 crc kubenswrapper[4979]: E0130 21:41:51.069153 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.069264 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:51 crc kubenswrapper[4979]: E0130 21:41:51.069331 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.082709 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:18:40.112510908 +0000 UTC Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.114903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.114966 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.114981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.115000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.115012 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217725 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217761 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321237 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321262 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321306 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.423938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424060 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424091 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424113 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632777 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735427 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839043 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839104 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839157 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.942675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.942760 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.942787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.942827 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.942854 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.046337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.046407 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.046424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.046460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.046482 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.069171 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:52 crc kubenswrapper[4979]: E0130 21:41:52.069406 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.083329 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:03:42.092976458 +0000 UTC
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.149683 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.149776 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.149802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.149839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.149867 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.253954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.254069 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.254095 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.254124 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.254146 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.357077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.357146 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.357201 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.357227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.357245 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.460193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.460277 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.460295 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.460330 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.460351 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.563726 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.563784 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.563802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.563825 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.563846 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.667549 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.667622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.667633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.667651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.667665 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.771812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.771878 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.771895 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.771921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.771940 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.874670 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.875228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.875398 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.875573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.875720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.979008 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.979073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.979083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.979099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.979111 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:52Z","lastTransitionTime":"2026-01-30T21:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.069136 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.069162 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.069384 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:53 crc kubenswrapper[4979]: E0130 21:41:53.069618 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:53 crc kubenswrapper[4979]: E0130 21:41:53.069756 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:53 crc kubenswrapper[4979]: E0130 21:41:53.069850 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
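The KubeletNotReady condition repeated above is driven by a single check: the kubelet finds no CNI config under /etc/kubernetes/cni/net.d/. On this cluster that file is normally written by the network plugin daemonsets (OVN-Kubernetes / Multus) once they come up, not by hand. The Go sketch below only illustrates the kind of file the readiness check is waiting for; the file name, the bridge plugin, and the subnet are assumptions for illustration, not this cluster's real config:

    // Sketch: drop a minimal CNI config where the kubelet's network
    // readiness check looks for one. Illustrative only -- on OpenShift
    // this file is managed by the network operator, never written by hand.
    package main

    import "os"

    const conf = `{
      "cniVersion": "0.4.0",
      "name": "example-net",
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {"type": "host-local", "subnet": "10.85.0.0/16"}
    }`

    func main() {
        // The kubelet stays NotReady until a *.conf or *.conflist appears here.
        if err := os.WriteFile("/etc/kubernetes/cni/net.d/99-example.conf", []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }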
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.081562 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.081610 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.081622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.081645 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.081659 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.083759 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:07:42.489673239 +0000 UTC Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.185223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.185284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.185296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.185317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.185331 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.288746 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.289223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.289244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.289261 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.289274 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.392432 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.392527 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.392549 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.392577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.392598 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.496198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.496245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.496256 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.496275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.496287 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.599336 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.599393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.599409 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.599431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.599442 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.702527 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.702603 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.702628 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.702666 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.702690 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.806137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.806194 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.806204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.806224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.806235 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.909743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.909790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.909804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.909822 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.909833 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:53Z","lastTransitionTime":"2026-01-30T21:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.013410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.013457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.013470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.013490 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.013499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:54Z","lastTransitionTime":"2026-01-30T21:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.069437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:54 crc kubenswrapper[4979]: E0130 21:41:54.069671 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.084892 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:43:31.027791473 +0000 UTC Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.103701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.103768 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.103781 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.103802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.103822 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:54Z","lastTransitionTime":"2026-01-30T21:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.127071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.127123 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.127136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.127155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.127169 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:54Z","lastTransitionTime":"2026-01-30T21:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.160275 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"] Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.160807 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166387 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166521 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166692 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166903 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.220357 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p8nz9" podStartSLOduration=90.220314559 podStartE2EDuration="1m30.220314559s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.216062472 +0000 UTC m=+110.177309515" watchObservedRunningTime="2026-01-30 21:41:54.220314559 +0000 UTC m=+110.181561602"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.245392 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-f2xld" podStartSLOduration=90.245303868 podStartE2EDuration="1m30.245303868s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.245198485 +0000 UTC m=+110.206445528" watchObservedRunningTime="2026-01-30 21:41:54.245303868 +0000 UTC m=+110.206550941"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.282007 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.281791834 podStartE2EDuration="1m28.281791834s" podCreationTimestamp="2026-01-30 21:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.265826524 +0000 UTC m=+110.227073567" watchObservedRunningTime="2026-01-30 21:41:54.281791834 +0000 UTC m=+110.243038877"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.290332 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.290401 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.290434 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.290628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.290702 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392601 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392681 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392753 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392826 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392937 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.393338 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.395253 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.403231 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.413580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.487695 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: W0130 21:41:54.511121 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e08d4ed_5213_4b6a_bd78_92e91b0ba9fb.slice/crio-ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98 WatchSource:0}: Error finding container ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98: Status 404 returned error can't find the container with id ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.713326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" event={"ID":"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb","Type":"ContainerStarted","Data":"ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98"}
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.069170 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:55 crc kubenswrapper[4979]: E0130 21:41:55.070572 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.070624 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:55 crc kubenswrapper[4979]: E0130 21:41:55.070691 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.070718 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:55 crc kubenswrapper[4979]: E0130 21:41:55.071055 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.086019 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:48:06.741126457 +0000 UTC
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.086158 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.095508 4979 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.717918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" event={"ID":"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb","Type":"ContainerStarted","Data":"a945d26a981ee5d17f006cb8154b7d0921bb51266655d9527d1bc642d04c0f4f"}
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.733083 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podStartSLOduration=91.733062321 podStartE2EDuration="1m31.733062321s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.28417707 +0000 UTC m=+110.245424153" watchObservedRunningTime="2026-01-30 21:41:55.733062321 +0000 UTC m=+111.694309354"
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.733995 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" podStartSLOduration=90.733988436 podStartE2EDuration="1m30.733988436s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:55.732712831 +0000 UTC m=+111.693959874" watchObservedRunningTime="2026-01-30 21:41:55.733988436 +0000 UTC m=+111.695235459"
Jan 30 21:41:56 crc kubenswrapper[4979]: I0130 21:41:56.069328 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:56 crc kubenswrapper[4979]: E0130 21:41:56.069516 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:57 crc kubenswrapper[4979]: I0130 21:41:57.069187 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:57 crc kubenswrapper[4979]: I0130 21:41:57.069261 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:57 crc kubenswrapper[4979]: I0130 21:41:57.069478 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:57 crc kubenswrapper[4979]: E0130 21:41:57.069880 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:57 crc kubenswrapper[4979]: E0130 21:41:57.070107 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:57 crc kubenswrapper[4979]: E0130 21:41:57.070203 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:58 crc kubenswrapper[4979]: I0130 21:41:58.068977 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:58 crc kubenswrapper[4979]: E0130 21:41:58.069175 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:59 crc kubenswrapper[4979]: I0130 21:41:59.069217 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:59 crc kubenswrapper[4979]: I0130 21:41:59.069289 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:59 crc kubenswrapper[4979]: E0130 21:41:59.069392 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:59 crc kubenswrapper[4979]: I0130 21:41:59.069486 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:59 crc kubenswrapper[4979]: E0130 21:41:59.069542 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:59 crc kubenswrapper[4979]: E0130 21:41:59.069738 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.069224 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:00 crc kubenswrapper[4979]: E0130 21:42:00.069793 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.735330 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736154 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/0.log"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736253 4979 generic.go:334] "Generic (PLEG): container finished" podID="6722e8df-a635-4808-b6b9-d5633fc3d34b" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5" exitCode=1
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerDied","Data":"94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5"}
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736379 4979 scope.go:117] "RemoveContainer" containerID="553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.738021 4979 scope.go:117] "RemoveContainer" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5"
Jan 30 21:42:00 crc kubenswrapper[4979]: E0130 21:42:00.738453 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xh5mg_openshift-multus(6722e8df-a635-4808-b6b9-d5633fc3d34b)\"" pod="openshift-multus/multus-xh5mg" podUID="6722e8df-a635-4808-b6b9-d5633fc3d34b"
Jan 30 21:42:01 crc kubenswrapper[4979]: I0130 21:42:01.068965 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:01 crc kubenswrapper[4979]: I0130 21:42:01.069002 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:42:01 crc kubenswrapper[4979]: E0130 21:42:01.069201 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:42:01 crc kubenswrapper[4979]: E0130 21:42:01.069320 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:42:01 crc kubenswrapper[4979]: I0130 21:42:01.069142 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:42:01 crc kubenswrapper[4979]: E0130 21:42:01.069469 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:42:01 crc kubenswrapper[4979]: I0130 21:42:01.741629 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log"
Jan 30 21:42:02 crc kubenswrapper[4979]: I0130 21:42:02.068925 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:02 crc kubenswrapper[4979]: E0130 21:42:02.069377 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:42:02 crc kubenswrapper[4979]: I0130 21:42:02.069563 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"
Jan 30 21:42:02 crc kubenswrapper[4979]: E0130 21:42:02.069752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a"
Jan 30 21:42:03 crc kubenswrapper[4979]: I0130 21:42:03.069115 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:42:03 crc kubenswrapper[4979]: E0130 21:42:03.069326 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:42:03 crc kubenswrapper[4979]: I0130 21:42:03.069447 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:03 crc kubenswrapper[4979]: I0130 21:42:03.069703 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:42:03 crc kubenswrapper[4979]: E0130 21:42:03.069863 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:42:03 crc kubenswrapper[4979]: E0130 21:42:03.070156 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:42:04 crc kubenswrapper[4979]: I0130 21:42:04.068735 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:04 crc kubenswrapper[4979]: E0130 21:42:04.068964 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:05 crc kubenswrapper[4979]: I0130 21:42:05.069205 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:05 crc kubenswrapper[4979]: I0130 21:42:05.069211 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:05 crc kubenswrapper[4979]: I0130 21:42:05.070531 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.070528 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.070634 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.070710 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.105718 4979 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.184824 4979 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:42:06 crc kubenswrapper[4979]: I0130 21:42:06.069680 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:06 crc kubenswrapper[4979]: E0130 21:42:06.070004 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:07 crc kubenswrapper[4979]: I0130 21:42:07.069513 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:07 crc kubenswrapper[4979]: I0130 21:42:07.069513 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:07 crc kubenswrapper[4979]: E0130 21:42:07.069755 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:07 crc kubenswrapper[4979]: I0130 21:42:07.069566 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:07 crc kubenswrapper[4979]: E0130 21:42:07.069916 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:07 crc kubenswrapper[4979]: E0130 21:42:07.070179 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:08 crc kubenswrapper[4979]: I0130 21:42:08.068682 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:08 crc kubenswrapper[4979]: E0130 21:42:08.068895 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:09 crc kubenswrapper[4979]: I0130 21:42:09.069425 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:09 crc kubenswrapper[4979]: I0130 21:42:09.069541 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:09 crc kubenswrapper[4979]: E0130 21:42:09.069635 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:09 crc kubenswrapper[4979]: E0130 21:42:09.069939 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:09 crc kubenswrapper[4979]: I0130 21:42:09.069749 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:09 crc kubenswrapper[4979]: E0130 21:42:09.070159 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:10 crc kubenswrapper[4979]: I0130 21:42:10.069563 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:10 crc kubenswrapper[4979]: E0130 21:42:10.069735 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:10 crc kubenswrapper[4979]: E0130 21:42:10.186678 4979 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:42:11 crc kubenswrapper[4979]: I0130 21:42:11.069327 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:11 crc kubenswrapper[4979]: I0130 21:42:11.069327 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:11 crc kubenswrapper[4979]: E0130 21:42:11.069510 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:11 crc kubenswrapper[4979]: E0130 21:42:11.069545 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:11 crc kubenswrapper[4979]: I0130 21:42:11.069335 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:11 crc kubenswrapper[4979]: E0130 21:42:11.069641 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.069292 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.069834 4979 scope.go:117] "RemoveContainer" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5" Jan 30 21:42:12 crc kubenswrapper[4979]: E0130 21:42:12.069768 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.788922 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.789071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0"} Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.069219 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.069414 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.069711 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:13 crc kubenswrapper[4979]: E0130 21:42:13.069844 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:13 crc kubenswrapper[4979]: E0130 21:42:13.069947 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:13 crc kubenswrapper[4979]: E0130 21:42:13.069992 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.070353 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.795757 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.798727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.799293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.835261 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podStartSLOduration=108.835238155 podStartE2EDuration="1m48.835238155s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:13.834010602 +0000 UTC m=+129.795257635" watchObservedRunningTime="2026-01-30 21:42:13.835238155 +0000 UTC m=+129.796485188" Jan 30 21:42:14 crc kubenswrapper[4979]: I0130 21:42:14.068918 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:14 crc kubenswrapper[4979]: E0130 21:42:14.069117 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:14 crc kubenswrapper[4979]: I0130 21:42:14.313880 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pk47q"] Jan 30 21:42:14 crc kubenswrapper[4979]: I0130 21:42:14.802485 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:42:14 crc kubenswrapper[4979]: E0130 21:42:14.802634 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:42:15 crc kubenswrapper[4979]: I0130 21:42:15.069644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:42:15 crc kubenswrapper[4979]: I0130 21:42:15.069672 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.070906 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:42:15 crc kubenswrapper[4979]: I0130 21:42:15.070948 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.071142 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.071185 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.187762 4979 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 21:42:16 crc kubenswrapper[4979]: I0130 21:42:16.069293 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:16 crc kubenswrapper[4979]: E0130 21:42:16.069628 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:42:17 crc kubenswrapper[4979]: I0130 21:42:17.069437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:17 crc kubenswrapper[4979]: I0130 21:42:17.069502 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:42:17 crc kubenswrapper[4979]: I0130 21:42:17.069631 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:42:17 crc kubenswrapper[4979]: E0130 21:42:17.069772 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:42:17 crc kubenswrapper[4979]: E0130 21:42:17.069930 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:42:17 crc kubenswrapper[4979]: E0130 21:42:17.070060 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:42:18 crc kubenswrapper[4979]: I0130 21:42:18.069284 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:18 crc kubenswrapper[4979]: E0130 21:42:18.070025 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:42:19 crc kubenswrapper[4979]: I0130 21:42:19.069639 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:42:19 crc kubenswrapper[4979]: I0130 21:42:19.069707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:42:19 crc kubenswrapper[4979]: I0130 21:42:19.069663 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:19 crc kubenswrapper[4979]: E0130 21:42:19.069977 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:42:19 crc kubenswrapper[4979]: E0130 21:42:19.070194 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:42:19 crc kubenswrapper[4979]: E0130 21:42:19.070327 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:42:20 crc kubenswrapper[4979]: I0130 21:42:20.069221 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:20 crc kubenswrapper[4979]: E0130 21:42:20.069413 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.070287 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.070435 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.070925 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
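[editor's note] The block above repeats one pattern per pod per second: sandbox creation is deferred because no CNI config exists yet in /etc/kubernetes/cni/net.d/. A Python sketch that tallies the repeating NetworkPluginNotReady failures by pod, to show which pods were gated and for how long; the regex assumes the syslog-style prefix and pod="ns/name" convention visible in this log:

    # Tally "Error syncing pod" CNI failures per pod with first/last timestamp.
    import re, sys
    from collections import defaultdict

    ENTRY = re.compile(
        r'^(?P<ts>\w{3} \d+ [\d:]+) .*no CNI configuration file.*pod="(?P<pod>[^"]+)"'
    )

    seen = defaultdict(list)
    for line in sys.stdin:
        m = ENTRY.match(line)
        if m:
            seen[m.group("pod")].append(m.group("ts"))
    for pod, stamps in sorted(seen.items()):
        print(f"{len(stamps):4d}  {stamps[0]} .. {stamps[-1]}  {pod}")

Run against this section, it should report the same four pods (network-metrics-daemon-pk47q, networking-console-plugin, network-check-target, network-check-source) blocked from 21:42:02 until the CNI config appears.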
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.073464 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.073684 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.073805 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.074953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 21:42:22 crc kubenswrapper[4979]: I0130 21:42:22.069618 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:22 crc kubenswrapper[4979]: I0130 21:42:22.071975 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 21:42:22 crc kubenswrapper[4979]: I0130 21:42:22.072022 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.655999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.702134 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tdvvn"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.702984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.703385 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.703437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.703499 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.704436 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.704556 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.705331 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"
Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.707929 4979 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.708000 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.708093 4979 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.708113 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.712114 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.716095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.716724 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hwb2t"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.717904 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hwb2t"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.733335 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.737170 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.738168 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l44fm"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.738636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.739260 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.744804 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.745554 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.748877 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749308 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749526 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749654 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749769 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749950 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750121 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750502 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750860 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750959 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751071 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750863 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffscn"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751248 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751359 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751930 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.752819 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.753778 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.753962 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.758462 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.758525 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.758575 4979 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: secrets "console-operator-dockercfg-4xjcr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.760198 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-operator-dockercfg-4xjcr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.763314 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.763657 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.767525 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.768070 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.768899 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.769437 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.769682 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.769838 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.770020 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"
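[editor's note] The W/E reflector pairs above ("no relationship found between node 'crc' and this object") are the node authorizer rejecting list/watch requests for secrets and configmaps whose pods were only just assigned to the node; they clear once the authorizer's node-pod graph catches up, as the later "Caches populated" lines show. A Python sketch that pulls the affected namespace/name pairs out of such a log; the regex keys on the stable object-"ns"/"name" phrasing in the warning lines and is an illustrative assumption, not a format guaranteed by kubelet:

    # List namespace/name pairs that hit the node-authorizer race above.
    import re, sys

    PATTERN = re.compile(
        r'object-"(?P<ns>[^"]+)"/"(?P<name>[^"]+)": failed to list \*v1\.(?P<kind>\w+)'
    )

    def forbidden_objects(lines):
        for line in lines:
            if "no relationship found between node" not in line:
                continue
            m = PATTERN.search(line)  # matches the W lines; E lines escape the quotes
            if m:
                yield m.group("ns"), m.group("name"), m.group("kind")

    for ns, name, kind in sorted(set(forbidden_objects(sys.stdin))):
        print(f"{kind:9s} {ns}/{name}")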
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.770341 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.774664 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783513 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783543 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tdvvn"]
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778634 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7616472e-472c-4dfa-bf69-97d784e1e42f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783625 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp6s\" (UniqueName: \"kubernetes.io/projected/7616472e-472c-4dfa-bf69-97d784e1e42f-kube-api-access-2cp6s\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783648 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt57s\" (UniqueName: \"kubernetes.io/projected/c38d45aa-0713-4059-8c2d-59a9b1cb5861-kube-api-access-vt57s\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783681 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-encryption-config\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783700 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783723 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-node-pullsecrets\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit-dir\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783784 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmbg\" (UniqueName: \"kubernetes.io/projected/9e86ea88-60d1-4af7-8095-5ee44e176029-kube-api-access-ghmbg\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783819 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783837 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2vgz\" (UniqueName: \"kubernetes.io/projected/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-kube-api-access-v2vgz\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783857 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b609710f-4a90-417e-9e31-b1a045c1e8a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783881 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-serving-cert\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b609710f-4a90-417e-9e31-b1a045c1e8a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783926 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-client\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783950 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-dir\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783966 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783984 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-auth-proxy-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwf2j\" (UniqueName: \"kubernetes.io/projected/b609710f-4a90-417e-9e31-b1a045c1e8a2-kube-api-access-dwf2j\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784052 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784081 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-serving-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784132 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-policies\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-config\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784223 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-image-import-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-encryption-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-images\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784303 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784318 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7v7\" (UniqueName: \"kubernetes.io/projected/21b53e08-d25e-41ab-a180-4b852eb77c8c-kube-api-access-4b7v7\") pod \"downloads-7954f5f757-hwb2t\" (UID: \"21b53e08-d25e-41ab-a180-4b852eb77c8c\") " pod="openshift-console/downloads-7954f5f757-hwb2t"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784333 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-serving-cert\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784354 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5fj\" (UniqueName: \"kubernetes.io/projected/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-kube-api-access-lk5fj\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784376 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784390 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-client\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784407 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-trusted-ca\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784420 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784449 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.777104 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784918 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778064 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.777293 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.785953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.786336 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.786617 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.786797 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.786899 4979 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: configmaps "console-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.786950 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.787259 4979 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.787397 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787650 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787740 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787805 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787921 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.788024 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.788251 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778149 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.788121 4979 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: configmaps "oauth-serving-cert" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.788685 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"oauth-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778264 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.788906 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.779233 4979 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789209 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.789225 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.779892 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789498 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.789648 4979 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.789702 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789140 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778214 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.789951 4979 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: secrets "config-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.789982 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"config-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789397 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790173 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790263 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790185 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789443 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:42:24 crc 
kubenswrapper[4979]: W0130 21:42:24.789870 4979 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: secrets "openshift-config-operator-dockercfg-7pc5z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.790443 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-config-operator-dockercfg-7pc5z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790498 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.795414 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.796246 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.805798 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.806248 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.806453 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.806643 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.807710 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ww6sg"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.808371 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.822735 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.839951 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.840457 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.840961 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.849045 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.849161 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.853793 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.855206 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.855597 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.855859 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.856472 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.856861 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857017 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857198 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857388 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857501 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857648 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857770 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.858010 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.865282 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.865507 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 
30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.865853 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.870432 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.870945 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.871194 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.871200 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.871381 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.872081 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.872098 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.885003 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886011 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5fj\" (UniqueName: \"kubernetes.io/projected/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-kube-api-access-lk5fj\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886123 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/814afa6a-716d-4011-89f9-6ccbc336e361-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886164 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc 
kubenswrapper[4979]: I0130 21:42:24.886198 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26cfd7ef-1024-479e-bdc5-e39429a16ee5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886237 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886283 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-client\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-trusted-ca\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886389 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886392 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886417 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886446 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886530 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886597 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d768fc5d-52c2-4901-a7cd-759d26f88251-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886624 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7616472e-472c-4dfa-bf69-97d784e1e42f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886723 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886757 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg56\" (UniqueName: \"kubernetes.io/projected/d768fc5d-52c2-4901-a7cd-759d26f88251-kube-api-access-5rg56\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc 
kubenswrapper[4979]: I0130 21:42:24.886786 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xm4\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-kube-api-access-q2xm4\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886820 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp6s\" (UniqueName: \"kubernetes.io/projected/7616472e-472c-4dfa-bf69-97d784e1e42f-kube-api-access-2cp6s\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886854 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt57s\" (UniqueName: \"kubernetes.io/projected/c38d45aa-0713-4059-8c2d-59a9b1cb5861-kube-api-access-vt57s\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886896 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-encryption-config\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886931 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886994 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-node-pullsecrets\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887016 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit-dir\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887063 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887113 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887143 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887175 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887203 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmbg\" (UniqueName: \"kubernetes.io/projected/9e86ea88-60d1-4af7-8095-5ee44e176029-kube-api-access-ghmbg\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887258 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887288 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2vgz\" (UniqueName: \"kubernetes.io/projected/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-kube-api-access-v2vgz\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887357 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b609710f-4a90-417e-9e31-b1a045c1e8a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895262 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895320 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-serving-cert\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895357 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814afa6a-716d-4011-89f9-6ccbc336e361-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895399 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b609710f-4a90-417e-9e31-b1a045c1e8a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895464 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895503 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-client\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895529 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895557 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895583 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-dir\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895607 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-auth-proxy-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895658 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895687 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqmg\" (UniqueName: \"kubernetes.io/projected/4d2da2c2-6056-4902-a20b-19333d24a600-kube-api-access-bvqmg\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895715 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895717 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895744 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwf2j\" (UniqueName: \"kubernetes.io/projected/b609710f-4a90-417e-9e31-b1a045c1e8a2-kube-api-access-dwf2j\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.896685 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.896737 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.908009 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.908114 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.908593 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.910408 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-auth-proxy-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.910665 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.913209 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hgm9w"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.913809 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.914268 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.914586 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916509 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-serving-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2da2c2-6056-4902-a20b-19333d24a600-metrics-tls\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916773 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916873 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cx5c\" (UniqueName: \"kubernetes.io/projected/26cfd7ef-1024-479e-bdc5-e39429a16ee5-kube-api-access-4cx5c\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916961 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-policies\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917170 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-config\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917269 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917352 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-image-import-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-encryption-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917626 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.918117 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: 
\"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.918636 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-serving-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.918894 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-serving-cert\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.919237 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-policies\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.919718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.919820 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7616472e-472c-4dfa-bf69-97d784e1e42f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917232 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920191 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b609710f-4a90-417e-9e31-b1a045c1e8a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917395 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917622 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-dir\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920518 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-images\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920549 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920594 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7v7\" (UniqueName: \"kubernetes.io/projected/21b53e08-d25e-41ab-a180-4b852eb77c8c-kube-api-access-4b7v7\") pod \"downloads-7954f5f757-hwb2t\" (UID: \"21b53e08-d25e-41ab-a180-4b852eb77c8c\") " pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920635 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-serving-cert\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917449 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920887 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-node-pullsecrets\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920959 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit-dir\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920830 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-config\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917483 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.921507 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-image-import-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.922160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-images\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923018 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923272 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923362 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-client\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923774 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924106 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924282 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-client\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924676 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.925199 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.925930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.926685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-serving-cert\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.927339 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.927607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.929325 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.929377 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.929402 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.930862 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-trsfj"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.931209 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.931469 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.931960 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.932087 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.942892 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.943747 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.945600 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.960835 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-encryption-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.961184 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.964564 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-trusted-ca\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.964910 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b609710f-4a90-417e-9e31-b1a045c1e8a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.966643 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.966775 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.967706 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.973119 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-encryption-config\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.973571 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.975512 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.982764 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.998081 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.003223 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.008504 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.010895 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.011180 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.012224 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.013329 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwf2j\" (UniqueName: \"kubernetes.io/projected/b609710f-4a90-417e-9e31-b1a045c1e8a2-kube-api-access-dwf2j\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.013427 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.013873 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.014801 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.014901 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.014922 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.015918 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.018665 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.019529 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028498 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028944 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814afa6a-716d-4011-89f9-6ccbc336e361-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028973 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029001 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029022 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029087 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqmg\" (UniqueName: \"kubernetes.io/projected/4d2da2c2-6056-4902-a20b-19333d24a600-kube-api-access-bvqmg\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029121 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029158 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029201 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2da2c2-6056-4902-a20b-19333d24a600-metrics-tls\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029218 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cx5c\" (UniqueName: \"kubernetes.io/projected/26cfd7ef-1024-479e-bdc5-e39429a16ee5-kube-api-access-4cx5c\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029259 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029357 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/814afa6a-716d-4011-89f9-6ccbc336e361-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029396 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26cfd7ef-1024-479e-bdc5-e39429a16ee5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029418 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d768fc5d-52c2-4901-a7cd-759d26f88251-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029520 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029556 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rg56\" (UniqueName: \"kubernetes.io/projected/d768fc5d-52c2-4901-a7cd-759d26f88251-kube-api-access-5rg56\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029575 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xm4\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-kube-api-access-q2xm4\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029620 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") 
pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029646 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029670 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029702 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029728 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029744 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.031823 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814afa6a-716d-4011-89f9-6ccbc336e361-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.033227 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035047 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 
21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035683 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035814 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035827 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.036647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.037135 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.037850 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d768fc5d-52c2-4901-a7cd-759d26f88251-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.037945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.038098 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.039188 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.039720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.040208 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.040424 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjfp6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.041239 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.042699 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.042715 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.043158 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.043665 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.044279 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045116 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045450 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26cfd7ef-1024-479e-bdc5-e39429a16ee5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045761 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045839 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045852 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/814afa6a-716d-4011-89f9-6ccbc336e361-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.046144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.046553 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.046880 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.047656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.048322 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.048945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.054822 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l44fm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.054872 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.054889 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-969ns"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.055578 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.055709 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056057 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056086 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056637 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056793 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.057248 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.058646 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.060690 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2da2c2-6056-4902-a20b-19333d24a600-metrics-tls\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.061684 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.063334 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.065016 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ww6sg"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.069767 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.076212 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.076258 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hwb2t"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.076917 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-trsfj"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.078235 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.080175 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lbd69"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.083981 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2zdrx"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.084140 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.085320 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.085367 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.085471 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.086241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffscn"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.092043 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.092110 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.094723 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tbr4j"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.096261 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.105087 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-464m7"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.105193 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.105321 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.106135 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.106455 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.106724 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.107664 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.109485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.113294 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjfp6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.114585 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.116064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.122518 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.124545 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.126271 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-969ns"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.127378 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.128228 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lbd69"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.129708 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.131813 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.133585 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-464m7"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.135048 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tbr4j"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.136732 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.137947 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.139118 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.140158 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.145693 4979 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.165637 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.188288 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.207176 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.230476 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.267472 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5fj\" (UniqueName: \"kubernetes.io/projected/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-kube-api-access-lk5fj\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.286656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt57s\" (UniqueName: \"kubernetes.io/projected/c38d45aa-0713-4059-8c2d-59a9b1cb5861-kube-api-access-vt57s\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.307460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp6s\" (UniqueName: \"kubernetes.io/projected/7616472e-472c-4dfa-bf69-97d784e1e42f-kube-api-access-2cp6s\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.338345 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.346172 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2vgz\" (UniqueName: \"kubernetes.io/projected/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-kube-api-access-v2vgz\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.356120 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.361584 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmbg\" (UniqueName: \"kubernetes.io/projected/9e86ea88-60d1-4af7-8095-5ee44e176029-kube-api-access-ghmbg\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.364796 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.376916 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.386177 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.388862 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7v7\" (UniqueName: \"kubernetes.io/projected/21b53e08-d25e-41ab-a180-4b852eb77c8c-kube-api-access-4b7v7\") pod \"downloads-7954f5f757-hwb2t\" (UID: \"21b53e08-d25e-41ab-a180-4b852eb77c8c\") " pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.394656 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.408013 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.429232 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.442137 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.453538 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.468410 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.488756 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.511671 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.516363 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.528138 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.546589 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.551503 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38d45aa_0713_4059_8c2d_59a9b1cb5861.slice/crio-83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128 WatchSource:0}: Error finding container 83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128: Status 404 returned error can't find the container with id 83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.566140 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.587240 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.598584 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tdvvn"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.605690 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.625645 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.636395 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.646971 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.662407 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7616472e_472c_4dfa_bf69_97d784e1e42f.slice/crio-b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1 WatchSource:0}: Error finding container b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1: Status 404 returned error can't find the container with id b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.669073 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.669519 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.685851 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e86ea88_60d1_4af7_8095_5ee44e176029.slice/crio-10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6 WatchSource:0}: Error finding container 10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6: Status 404 returned error can't find the container with id 10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.686128 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.698885 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hwb2t"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.707625 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.710706 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b53e08_d25e_41ab_a180_4b852eb77c8c.slice/crio-a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031 WatchSource:0}: Error finding container a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031: Status 404 returned error can't find the container with id a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.741774 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-config\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.741859 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-serving-cert\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.741953 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742012 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742086 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742160 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742197 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742293 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742314 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742331 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742381 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcrm\" (UniqueName: 
\"kubernetes.io/projected/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-kube-api-access-6tcrm\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.745548 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.245521147 +0000 UTC m=+142.206768180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.765861 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.786767 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.806924 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.830375 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-stats-auth\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844873 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvnk\" (UniqueName: \"kubernetes.io/projected/7a7b036f-4e32-47e9-b700-da7ef3615e4f-kube-api-access-xvvnk\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844903 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-tmpfs\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844921 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpkp\" (UniqueName: \"kubernetes.io/projected/ed73bac2-f781-4475-b265-8c8820d10e3b-kube-api-access-ngpkp\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844958 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf74x\" (UniqueName: \"kubernetes.io/projected/5ec159e5-6cc8-4130-a83c-ad402c63e175-kube-api-access-lf74x\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845015 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-config\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845170 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-serving-cert\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845195 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-certs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845227 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df702c9e-2d17-476e-9bbe-d41784bf809b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845264 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845285 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845304 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ad194c8-35db-4a68-9c59-575a8971d714-signing-key\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845332 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845354 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-srv-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845374 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845392 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-metrics-certs\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845411 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2063d8fc-0614-40e7-be84-ebfbda9acd89-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845451 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxdm\" (UniqueName: \"kubernetes.io/projected/38abc107-38ba-4e77-b00f-eece6eb28537-kube-api-access-6dxdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: 
\"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845469 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed73bac2-f781-4475-b265-8c8820d10e3b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-config\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845550 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-proxy-tls\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845567 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-srv-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845586 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-client\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845619 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845639 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed73bac2-f781-4475-b265-8c8820d10e3b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845673 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ad194c8-35db-4a68-9c59-575a8971d714-signing-cabundle\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845700 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df702c9e-2d17-476e-9bbe-d41784bf809b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845736 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcrt\" (UniqueName: \"kubernetes.io/projected/241b3d1c-56ec-4088-bcfa-bea0aecea050-kube-api-access-llcrt\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845787 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2063d8fc-0614-40e7-be84-ebfbda9acd89-config\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845805 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f0c12f1-c780-4020-921b-11e410503db3-proxy-tls\") pod \"machine-config-controller-84d6567774-nzzr2\" 
(UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845822 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6wl\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-kube-api-access-jg6wl\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845857 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rcg\" (UniqueName: \"kubernetes.io/projected/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-kube-api-access-s5rcg\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845878 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-registration-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7638c8d5-0616-4612-9d15-7594e4f74184-serving-cert\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845926 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcrm\" (UniqueName: \"kubernetes.io/projected/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-kube-api-access-6tcrm\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9svw7\" (UniqueName: \"kubernetes.io/projected/4f0c12f1-c780-4020-921b-11e410503db3-kube-api-access-9svw7\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845959 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec159e5-6cc8-4130-a83c-ad402c63e175-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: 
\"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845983 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846000 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846016 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846138 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jwr\" (UniqueName: \"kubernetes.io/projected/4334e640-e3c2-4238-b7da-85e73bda80af-kube-api-access-v2jwr\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-socket-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-profile-collector-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846314 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndnb\" (UniqueName: \"kubernetes.io/projected/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-kube-api-access-kndnb\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846356 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dda3a423-1b53-4e85-9ef1-123fe54ceb98-metrics-tls\") pod \"ingress-operator-5b745b69d9-cckwg\" 
(UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846375 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846413 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df702c9e-2d17-476e-9bbe-d41784bf809b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847220 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-mountpoint-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847286 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5s2\" (UniqueName: \"kubernetes.io/projected/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-kube-api-access-lq5s2\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847308 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpx2n\" (UniqueName: \"kubernetes.io/projected/7ad194c8-35db-4a68-9c59-575a8971d714-kube-api-access-xpx2n\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.847370 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.347332944 +0000 UTC m=+142.308580147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847430 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7638c8d5-0616-4612-9d15-7594e4f74184-config\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847604 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847640 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7j5\" (UniqueName: \"kubernetes.io/projected/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-kube-api-access-6h7j5\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847672 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f0c12f1-c780-4020-921b-11e410503db3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847699 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7b036f-4e32-47e9-b700-da7ef3615e4f-cert\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847773 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2063d8fc-0614-40e7-be84-ebfbda9acd89-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " 
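The MountDevice and TearDown failures above share one root cause: kubelet has no registered node plugin for kubevirt.io.hostpath-provisioner yet (the csi-hostpathplugin-tbr4j pod whose host-path volumes are being attached in these same entries is what will eventually register it). Node plugins announce themselves by creating a registration socket under kubelet's plugin registry directory. A small diagnostic sketch in Go follows, assuming the conventional default path /var/lib/kubelet/plugins_registry (it moves with kubelet's --root-dir).

package main

import (
	"fmt"
	"os"
	"strings"
)

// Lists the sockets in kubelet's plugin registry and reports whether the
// hostpath provisioner has registered. Until it has, every
// MountDevice/TearDownAt for its volumes fails exactly as in the log.
func main() {
	const registry = "/var/lib/kubelet/plugins_registry" // assumed default location
	const driver = "kubevirt.io.hostpath-provisioner"

	entries, err := os.ReadDir(registry)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read plugin registry:", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		fmt.Println("registered plugin socket:", e.Name())
		if strings.Contains(e.Name(), driver) {
			found = true
		}
	}
	if !found {
		fmt.Printf("%s has no registration socket yet; volume operations will keep retrying\n", driver)
	}
}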
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847800 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-config\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847825 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda3a423-1b53-4e85-9ef1-123fe54ceb98-trusted-ca\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847858 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847885 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-csi-data-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-apiservice-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847957 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-default-certificate\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847990 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8mn\" (UniqueName: \"kubernetes.io/projected/e7334e56-32c0-40f4-b60d-afab26024b6a-kube-api-access-fr8mn\") pod \"migrator-59844c95c7-s86jb\" (UID: \"e7334e56-32c0-40f4-b60d-afab26024b6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848019 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc 
kubenswrapper[4979]: I0130 21:42:25.848071 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848094 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-serving-cert\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848121 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gt4\" (UniqueName: \"kubernetes.io/projected/7638c8d5-0616-4612-9d15-7594e4f74184-kube-api-access-q8gt4\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848146 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-images\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334e640-e3c2-4238-b7da-85e73bda80af-service-ca-bundle\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848270 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7pb\" (UniqueName: \"kubernetes.io/projected/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-kube-api-access-wq7pb\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jjs\" (UniqueName: \"kubernetes.io/projected/f65257ab-42e6-4f77-ab65-f9f762c8ae42-kube-api-access-n9jjs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: 
I0130 21:42:25.848327 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38abc107-38ba-4e77-b00f-eece6eb28537-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848419 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848444 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-plugins-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848474 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlg7d\" (UniqueName: \"kubernetes.io/projected/0f7429df-aeda-4c76-9051-401488358e6c-kube-api-access-hlg7d\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848502 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848569 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjm7w\" (UniqueName: \"kubernetes.io/projected/66910c2a-724c-42a8-8511-a8ee6de7d140-kube-api-access-cjm7w\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848609 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-service-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-webhook-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848693 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848748 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848774 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k866t\" (UniqueName: \"kubernetes.io/projected/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-kube-api-access-k866t\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38abc107-38ba-4e77-b00f-eece6eb28537-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.851338 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-config\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.852581 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.352568388 +0000 UTC m=+142.313815421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.852631 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.853148 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.853871 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.854185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.855069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.859638 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.860628 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.863922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.864342 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-serving-cert\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.865825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.875511 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" event={"ID":"c38d45aa-0713-4059-8c2d-59a9b1cb5861","Type":"ContainerStarted","Data":"4ccef6209b460edd87c24008fcd6e78ad0415660bdd14124b8ede142bd3080ba"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.875614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" event={"ID":"c38d45aa-0713-4059-8c2d-59a9b1cb5861","Type":"ContainerStarted","Data":"83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.877729 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerStarted","Data":"fbcefed559af56f817450efb27400d18fbce7bb268fc0588724c1105ad41d38b"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.882573 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" event={"ID":"9e86ea88-60d1-4af7-8095-5ee44e176029","Type":"ContainerStarted","Data":"10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.884410 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerStarted","Data":"1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.884448 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerStarted","Data":"a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.885349 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.888059 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" event={"ID":"b609710f-4a90-417e-9e31-b1a045c1e8a2","Type":"ContainerStarted","Data":"494f52ceee5f880a7a8f0ddabc3bd4351cacf36bbd408a5967adf19cdb046599"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.888089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" 
event={"ID":"b609710f-4a90-417e-9e31-b1a045c1e8a2","Type":"ContainerStarted","Data":"88ee093657069d3707223ee8495ad4585134244f60a83a3e600efd920298b96d"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.890141 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" event={"ID":"7616472e-472c-4dfa-bf69-97d784e1e42f","Type":"ContainerStarted","Data":"9fcd808e8139932180f2fb427fe37499a4e962be7e99a65174ca99da5d94ff10"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.890171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" event={"ID":"7616472e-472c-4dfa-bf69-97d784e1e42f","Type":"ContainerStarted","Data":"b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1"} Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.905543 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.916001 4979 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.916120 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config podName:ff61cd4b-2b9f-4588-be96-10038ccc4a92 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.416098247 +0000 UTC m=+142.377345280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config") pod "controller-manager-879f6c89f-4zkpx" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.920858 4979 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.921459 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert podName:45cde1ce-04ec-4fdd-bfc0-10d072a9eff1 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.421428425 +0000 UTC m=+142.382675458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert") pod "console-operator-58897d9998-l44fm" (UID: "45cde1ce-04ec-4fdd-bfc0-10d072a9eff1") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.921549 4979 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.921649 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config podName:45cde1ce-04ec-4fdd-bfc0-10d072a9eff1 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.42162773 +0000 UTC m=+142.382874933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config") pod "console-operator-58897d9998-l44fm" (UID: "45cde1ce-04ec-4fdd-bfc0-10d072a9eff1") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.926390 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.927092 4979 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.927179 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca podName:ff61cd4b-2b9f-4588-be96-10038ccc4a92 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.427157623 +0000 UTC m=+142.388404656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca") pod "controller-manager-879f6c89f-4zkpx" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.946447 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.950317 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.450274852 +0000 UTC m=+142.411521885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950370 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df702c9e-2d17-476e-9bbe-d41784bf809b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950476 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcrt\" (UniqueName: \"kubernetes.io/projected/241b3d1c-56ec-4088-bcfa-bea0aecea050-kube-api-access-llcrt\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950509 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6wl\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-kube-api-access-jg6wl\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950808 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rcg\" (UniqueName: \"kubernetes.io/projected/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-kube-api-access-s5rcg\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950913 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-registration-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951334 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-registration-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951368 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df702c9e-2d17-476e-9bbe-d41784bf809b-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950943 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2063d8fc-0614-40e7-be84-ebfbda9acd89-config\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f0c12f1-c780-4020-921b-11e410503db3-proxy-tls\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951513 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7638c8d5-0616-4612-9d15-7594e4f74184-serving-cert\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951554 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9svw7\" (UniqueName: \"kubernetes.io/projected/4f0c12f1-c780-4020-921b-11e410503db3-kube-api-access-9svw7\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2063d8fc-0614-40e7-be84-ebfbda9acd89-config\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951588 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec159e5-6cc8-4130-a83c-ad402c63e175-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951628 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951650 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: 
\"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951680 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951701 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jwr\" (UniqueName: \"kubernetes.io/projected/4334e640-e3c2-4238-b7da-85e73bda80af-kube-api-access-v2jwr\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-profile-collector-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951752 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kndnb\" (UniqueName: \"kubernetes.io/projected/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-kube-api-access-kndnb\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951783 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-socket-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951801 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df702c9e-2d17-476e-9bbe-d41784bf809b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dda3a423-1b53-4e85-9ef1-123fe54ceb98-metrics-tls\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951836 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 
21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951851 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-mountpoint-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951872 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpx2n\" (UniqueName: \"kubernetes.io/projected/7ad194c8-35db-4a68-9c59-575a8971d714-kube-api-access-xpx2n\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951892 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq5s2\" (UniqueName: \"kubernetes.io/projected/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-kube-api-access-lq5s2\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951934 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951963 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7638c8d5-0616-4612-9d15-7594e4f74184-config\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-mountpoint-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951998 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952113 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7j5\" (UniqueName: \"kubernetes.io/projected/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-kube-api-access-6h7j5\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952128 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-socket-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952144 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f0c12f1-c780-4020-921b-11e410503db3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952418 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7b036f-4e32-47e9-b700-da7ef3615e4f-cert\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952450 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda3a423-1b53-4e85-9ef1-123fe54ceb98-trusted-ca\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952478 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2063d8fc-0614-40e7-be84-ebfbda9acd89-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952499 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-config\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-csi-data-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-apiservice-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-default-certificate\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952613 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8mn\" (UniqueName: \"kubernetes.io/projected/e7334e56-32c0-40f4-b60d-afab26024b6a-kube-api-access-fr8mn\") pod \"migrator-59844c95c7-s86jb\" (UID: \"e7334e56-32c0-40f4-b60d-afab26024b6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952637 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952676 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-images\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954277 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954379 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-serving-cert\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954389 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-csi-data-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954411 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8gt4\" (UniqueName: \"kubernetes.io/projected/7638c8d5-0616-4612-9d15-7594e4f74184-kube-api-access-q8gt4\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954605 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4334e640-e3c2-4238-b7da-85e73bda80af-service-ca-bundle\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954674 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7pb\" (UniqueName: \"kubernetes.io/projected/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-kube-api-access-wq7pb\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jjs\" (UniqueName: \"kubernetes.io/projected/f65257ab-42e6-4f77-ab65-f9f762c8ae42-kube-api-access-n9jjs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954765 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38abc107-38ba-4e77-b00f-eece6eb28537-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954841 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-plugins-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954871 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlg7d\" (UniqueName: \"kubernetes.io/projected/0f7429df-aeda-4c76-9051-401488358e6c-kube-api-access-hlg7d\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.955015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f0c12f1-c780-4020-921b-11e410503db3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952712 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.957904 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:26.457878763 +0000 UTC m=+142.419125796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.957973 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-plugins-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.958101 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334e640-e3c2-4238-b7da-85e73bda80af-service-ca-bundle\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.958712 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-config\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959290 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjm7w\" (UniqueName: \"kubernetes.io/projected/66910c2a-724c-42a8-8511-a8ee6de7d140-kube-api-access-cjm7w\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959319 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-service-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-webhook-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k866t\" (UniqueName: \"kubernetes.io/projected/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-kube-api-access-k866t\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38abc107-38ba-4e77-b00f-eece6eb28537-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959501 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-stats-auth\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959559 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvnk\" (UniqueName: \"kubernetes.io/projected/7a7b036f-4e32-47e9-b700-da7ef3615e4f-kube-api-access-xvvnk\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959583 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-tmpfs\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959606 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpkp\" (UniqueName: \"kubernetes.io/projected/ed73bac2-f781-4475-b265-8c8820d10e3b-kube-api-access-ngpkp\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf74x\" (UniqueName: \"kubernetes.io/projected/5ec159e5-6cc8-4130-a83c-ad402c63e175-kube-api-access-lf74x\") pod 
\"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959726 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-certs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959765 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df702c9e-2d17-476e-9bbe-d41784bf809b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959790 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ad194c8-35db-4a68-9c59-575a8971d714-signing-key\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960142 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-srv-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960182 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960208 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-metrics-certs\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960247 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2063d8fc-0614-40e7-be84-ebfbda9acd89-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxdm\" (UniqueName: \"kubernetes.io/projected/38abc107-38ba-4e77-b00f-eece6eb28537-kube-api-access-6dxdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960517 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed73bac2-f781-4475-b265-8c8820d10e3b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960548 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-config\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960655 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-proxy-tls\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960681 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-srv-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960702 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-client\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960904 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ad194c8-35db-4a68-9c59-575a8971d714-signing-cabundle\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960929 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960977 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed73bac2-f781-4475-b265-8c8820d10e3b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.961890 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-service-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.962222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda3a423-1b53-4e85-9ef1-123fe54ceb98-trusted-ca\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.962448 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dda3a423-1b53-4e85-9ef1-123fe54ceb98-metrics-tls\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.962702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-tmpfs\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.964061 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38abc107-38ba-4e77-b00f-eece6eb28537-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.964300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-config\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.965475 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed73bac2-f781-4475-b265-8c8820d10e3b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.966488 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-metrics-certs\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.968112 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.968827 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df702c9e-2d17-476e-9bbe-d41784bf809b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.968997 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-default-certificate\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.970486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.972900 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-serving-cert\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.973125 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38abc107-38ba-4e77-b00f-eece6eb28537-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.973464 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f0c12f1-c780-4020-921b-11e410503db3-proxy-tls\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.974772 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed73bac2-f781-4475-b265-8c8820d10e3b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.979284 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-client\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.976503 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2063d8fc-0614-40e7-be84-ebfbda9acd89-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.983174 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-stats-auth\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.985604 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.993699 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.007272 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.019399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.024281 4979 request.go:700] Waited for 1.008803642s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.026748 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.030849 4979 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.030986 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert podName:cc25d794-4ead-4436-a026-179f655c13d4 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.530946474 +0000 UTC m=+142.492193507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert") pod "console-f9d7485db-h6sv5" (UID: "cc25d794-4ead-4436-a026-179f655c13d4") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.034918 4979 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.035014 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert podName:d768fc5d-52c2-4901-a7cd-759d26f88251 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.534992827 +0000 UTC m=+142.496239860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert") pod "openshift-config-operator-7777fb866f-dqtmx" (UID: "d768fc5d-52c2-4901-a7cd-759d26f88251") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.047753 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.062796 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.063335 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.56330069 +0000 UTC m=+142.524547723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.064338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.065000 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.564991946 +0000 UTC m=+142.526238979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.067508 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.072732 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec159e5-6cc8-4130-a83c-ad402c63e175-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.085248 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.106676 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.126179 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.147251 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.165084 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.165251 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.665212289 +0000 UTC m=+142.626459322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.165811 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.166719 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.666698461 +0000 UTC m=+142.627945494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.168369 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.176760 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-webhook-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.179781 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-apiservice-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.202567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.220580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.267012 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.267447 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xm4\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-kube-api-access-q2xm4\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.267640 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.767616603 +0000 UTC m=+142.728863646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.273956 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.284502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.293289 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.303384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqmg\" (UniqueName: \"kubernetes.io/projected/4d2da2c2-6056-4902-a20b-19333d24a600-kube-api-access-bvqmg\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.306211 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.306767 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.326682 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.346216 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.358407 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.370465 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.370939 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.870923711 +0000 UTC m=+142.832170744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.373963 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.377336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.456690 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.456691 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.460607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.465727 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471064 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471591 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471721 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: 
\"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471823 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.474543 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cx5c\" (UniqueName: \"kubernetes.io/projected/26cfd7ef-1024-479e-bdc5-e39429a16ee5-kube-api-access-4cx5c\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.476721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.476871 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.976846581 +0000 UTC m=+142.938093614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.489400 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.497488 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-profile-collector-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.497495 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.500219 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: 
\"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.506783 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.519045 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ad194c8-35db-4a68-9c59-575a8971d714-signing-key\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.525922 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.545844 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.547990 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.557121 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.561625 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.566103 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.574876 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.575077 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.575148 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.576316 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.076266462 +0000 UTC m=+143.037513495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.581002 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.585695 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 21:42:26 crc kubenswrapper[4979]: W0130 21:42:26.588016 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod828e6466_447a_47f9_9727_3992db7c27c9.slice/crio-deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096 WatchSource:0}: Error finding container deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096: Status 404 returned error can't find the container with id deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.596194 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ad194c8-35db-4a68-9c59-575a8971d714-signing-cabundle\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.604610 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ww6sg"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.605869 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.627078 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.647157 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.658679 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7638c8d5-0616-4612-9d15-7594e4f74184-serving-cert\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.666776 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.669846 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7638c8d5-0616-4612-9d15-7594e4f74184-config\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:26 crc 
kubenswrapper[4979]: I0130 21:42:26.679578 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.679768 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.179737045 +0000 UTC m=+143.140984078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.680005 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.681121 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.181095613 +0000 UTC m=+143.142342646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.690863 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.707368 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.721464 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.730602 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.747019 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.757162 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-srv-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.766720 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.769590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-images\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.782299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.782424 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.282396145 +0000 UTC m=+143.243643188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.785797 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.786521 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.286494019 +0000 UTC m=+143.247741052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.793570 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.798683 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.800367 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-proxy-tls\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.808147 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.809235 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.827217 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.844532 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-srv-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:26 crc 
kubenswrapper[4979]: I0130 21:42:26.846319 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.865910 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.878848 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7b036f-4e32-47e9-b700-da7ef3615e4f-cert\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.886262 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.887499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.887691 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.387662178 +0000 UTC m=+143.348909211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.888018 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.888581 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.388565703 +0000 UTC m=+143.349812736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.907098 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.916412 4979 generic.go:334] "Generic (PLEG): container finished" podID="daf9c301-ff6e-47d9-a8a0-d88e6cf53d48" containerID="482279a721847b918c5fc4616a62f3a67d742bc6c5938bc4828e9ca15dcb97ba" exitCode=0 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.917213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerDied","Data":"482279a721847b918c5fc4616a62f3a67d742bc6c5938bc4828e9ca15dcb97ba"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.919864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" event={"ID":"814afa6a-716d-4011-89f9-6ccbc336e361","Type":"ContainerStarted","Data":"403cfba5196c5bf67f4cf059ebba316fa3e60cbe9e05f15ef7389dc5b80b5070"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.920930 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" event={"ID":"4d2da2c2-6056-4902-a20b-19333d24a600","Type":"ContainerStarted","Data":"06fb6f96cf1b0beeb7c7f19e2a0d2bdb71e4fd36261a8d789489994922e28ca6"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.926370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerStarted","Data":"1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.926424 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerStarted","Data":"deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.926910 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.928892 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.928904 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.928999 4979 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.930784 4979 generic.go:334] "Generic (PLEG): container finished" podID="9e86ea88-60d1-4af7-8095-5ee44e176029" containerID="a3d00ba6590a18fe81a8591db85e915deceb9dbe89e5165b070d6df1277064b1" exitCode=0 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.930857 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" event={"ID":"9e86ea88-60d1-4af7-8095-5ee44e176029","Type":"ContainerDied","Data":"a3d00ba6590a18fe81a8591db85e915deceb9dbe89e5165b070d6df1277064b1"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.933742 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerStarted","Data":"246d40c550fcc6c9fdc34ebbfdb6355e89a001f7901886dab00180fbdbb32fa5"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.938596 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" event={"ID":"c38d45aa-0713-4059-8c2d-59a9b1cb5861","Type":"ContainerStarted","Data":"0d246b15f86d8f2da268ea26abfe13d934e17724e430a831e1809a5e4c519a8d"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.943304 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" event={"ID":"7616472e-472c-4dfa-bf69-97d784e1e42f","Type":"ContainerStarted","Data":"4b1948e08ff729819e269849dbc9d1ac0e3c6abb56999ef9b7f0cf2ca265909c"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.943989 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.947441 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.947495 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.947771 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.951897 4979 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.952000 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume podName:ebc2a677-6e7a-41ce-a3f4-063acddaa66b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.451977798 +0000 UTC m=+143.413224841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume") pod "dns-default-464m7" (UID: "ebc2a677-6e7a-41ce-a3f4-063acddaa66b") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.961140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-certs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.963581 4979 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.963666 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls podName:ebc2a677-6e7a-41ce-a3f4-063acddaa66b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.463646761 +0000 UTC m=+143.424893794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls") pod "dns-default-464m7" (UID: "ebc2a677-6e7a-41ce-a3f4-063acddaa66b") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.965206 4979 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.965322 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token podName:f65257ab-42e6-4f77-ab65-f9f762c8ae42 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.465287446 +0000 UTC m=+143.426534479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token") pod "machine-config-server-2zdrx" (UID: "f65257ab-42e6-4f77-ab65-f9f762c8ae42") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.965401 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.986798 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.989630 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.991803 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:27.491779629 +0000 UTC m=+143.453026662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.006974 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.027723 4979 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.044304 4979 request.go:700] Waited for 1.93723165s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.046789 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.066956 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.086732 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.092192 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.092747 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.592721402 +0000 UTC m=+143.553968435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.106167 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.121823 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.126162 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.136431 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.166682 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.186239 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.194925 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.195320 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.695286859 +0000 UTC m=+143.656533892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.195527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.195947 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.695928177 +0000 UTC m=+143.657175210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.224095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcrm\" (UniqueName: \"kubernetes.io/projected/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-kube-api-access-6tcrm\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.249834 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.267074 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.268830 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.277323 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 
21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.296835 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.297478 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.797448946 +0000 UTC m=+143.758695999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.311894 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llcrt\" (UniqueName: \"kubernetes.io/projected/241b3d1c-56ec-4088-bcfa-bea0aecea050-kube-api-access-llcrt\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.321986 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6wl\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-kube-api-access-jg6wl\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.346621 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rcg\" (UniqueName: \"kubernetes.io/projected/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-kube-api-access-s5rcg\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.363930 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9svw7\" (UniqueName: \"kubernetes.io/projected/4f0c12f1-c780-4020-921b-11e410503db3-kube-api-access-9svw7\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.382356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.399567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.400200 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.900172639 +0000 UTC m=+143.861419672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.404988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.411594 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.425804 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jwr\" (UniqueName: \"kubernetes.io/projected/4334e640-e3c2-4238-b7da-85e73bda80af-kube-api-access-v2jwr\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.442542 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.448698 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndnb\" (UniqueName: \"kubernetes.io/projected/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-kube-api-access-kndnb\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.464942 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpx2n\" (UniqueName: \"kubernetes.io/projected/7ad194c8-35db-4a68-9c59-575a8971d714-kube-api-access-xpx2n\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.472136 4979 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.472248 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert podName:45cde1ce-04ec-4fdd-bfc0-10d072a9eff1 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.472223732 +0000 UTC m=+144.433470765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert") pod "console-operator-58897d9998-l44fm" (UID: "45cde1ce-04ec-4fdd-bfc0-10d072a9eff1") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.474472 4979 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.474557 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca podName:ff61cd4b-2b9f-4588-be96-10038ccc4a92 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.474537956 +0000 UTC m=+144.435784989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca") pod "controller-manager-879f6c89f-4zkpx" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.516789 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.516977 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.517362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.517462 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.517512 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.017483005 +0000 UTC m=+143.978730098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.517649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.518614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.519082 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.523890 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.526261 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.551656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7j5\" (UniqueName: \"kubernetes.io/projected/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-kube-api-access-6h7j5\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.555558 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8gt4\" (UniqueName: \"kubernetes.io/projected/7638c8d5-0616-4612-9d15-7594e4f74184-kube-api-access-q8gt4\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.557841 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq5s2\" (UniqueName: \"kubernetes.io/projected/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-kube-api-access-lq5s2\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.559751 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.562750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlg7d\" (UniqueName: \"kubernetes.io/projected/0f7429df-aeda-4c76-9051-401488358e6c-kube-api-access-hlg7d\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.571864 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.590511 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.592374 4979 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.592454 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert podName:cc25d794-4ead-4436-a026-179f655c13d4 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.592426728 +0000 UTC m=+144.553673761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert") pod "console-f9d7485db-h6sv5" (UID: "cc25d794-4ead-4436-a026-179f655c13d4") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.599587 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jjs\" (UniqueName: \"kubernetes.io/projected/f65257ab-42e6-4f77-ab65-f9f762c8ae42-kube-api-access-n9jjs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.602193 4979 csr.go:261] certificate signing request csr-rvdws is approved, waiting to be issued Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.604075 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7pb\" (UniqueName: \"kubernetes.io/projected/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-kube-api-access-wq7pb\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.608934 4979 csr.go:257] certificate signing request csr-rvdws is issued Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.617118 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.620315 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.620733 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8mn\" (UniqueName: \"kubernetes.io/projected/e7334e56-32c0-40f4-b60d-afab26024b6a-kube-api-access-fr8mn\") pod \"migrator-59844c95c7-s86jb\" (UID: \"e7334e56-32c0-40f4-b60d-afab26024b6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.620658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.621173 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.121149043 +0000 UTC m=+144.082396076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.627756 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.632185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rg56\" (UniqueName: \"kubernetes.io/projected/d768fc5d-52c2-4901-a7cd-759d26f88251-kube-api-access-5rg56\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.636336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.654574 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.667489 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.723012 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.723200 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.223175715 +0000 UTC m=+144.184422758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.724236 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.724365 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.724836 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.224822471 +0000 UTC m=+144.186069504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.754517 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.767428 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.776739 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvnk\" (UniqueName: \"kubernetes.io/projected/7a7b036f-4e32-47e9-b700-da7ef3615e4f-kube-api-access-xvvnk\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.785826 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2063d8fc-0614-40e7-be84-ebfbda9acd89-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.797908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.802779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k866t\" (UniqueName: \"kubernetes.io/projected/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-kube-api-access-k866t\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.803307 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjm7w\" (UniqueName: \"kubernetes.io/projected/66910c2a-724c-42a8-8511-a8ee6de7d140-kube-api-access-cjm7w\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.803925 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df702c9e-2d17-476e-9bbe-d41784bf809b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.804621 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpkp\" (UniqueName: \"kubernetes.io/projected/ed73bac2-f781-4475-b265-8c8820d10e3b-kube-api-access-ngpkp\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.807768 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf74x\" (UniqueName: \"kubernetes.io/projected/5ec159e5-6cc8-4130-a83c-ad402c63e175-kube-api-access-lf74x\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.809486 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.824289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxdm\" (UniqueName: \"kubernetes.io/projected/38abc107-38ba-4e77-b00f-eece6eb28537-kube-api-access-6dxdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.825926 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.826024 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.826345 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.826613 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.326582666 +0000 UTC m=+144.287829719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.826731 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.827186 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.327177143 +0000 UTC m=+144.288424186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.838184 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.849772 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.849885 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.852110 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.866277 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.870310 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.886545 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.901505 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.901665 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.903979 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.928382 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.929516 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.429492714 +0000 UTC m=+144.390739757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.937418 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.946597 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.960568 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podStartSLOduration=122.960538963 podStartE2EDuration="2m2.960538963s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:27.945091845 +0000 UTC m=+143.906338878" watchObservedRunningTime="2026-01-30 21:42:27.960538963 +0000 UTC m=+143.921785996" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.041556 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.041785 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.042086 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.542067158 +0000 UTC m=+144.503314191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.076073 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.082630 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffscn"] Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.100807 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" event={"ID":"9e86ea88-60d1-4af7-8095-5ee44e176029","Type":"ContainerStarted","Data":"5a4782cd5642ee53b3e77a36068ed257bcf3fcb651cda8c1cd1324fc8f074ca4"} Jan 30 21:42:28 crc kubenswrapper[4979]: W0130 21:42:28.133704 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9f4663_eacb_4b8f_b468_a1ee9e078f99.slice/crio-c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a WatchSource:0}: Error finding container c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a: Status 404 returned error can't find the container with id c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.136206 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.143075 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.643022932 +0000 UTC m=+144.604269965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.142836 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.143710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.145286 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.645276774 +0000 UTC m=+144.606523807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.200642 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" event={"ID":"4d2da2c2-6056-4902-a20b-19333d24a600","Type":"ContainerStarted","Data":"d0dc7790a29609475ad50a49c09ae26499280e6187466ce0945f8ee13190eded"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.200714 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" event={"ID":"4d2da2c2-6056-4902-a20b-19333d24a600","Type":"ContainerStarted","Data":"6017def96b6caa60065d5429a60ce72361ce49cd895b8bb21850f07af87fcac5"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.204942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2zdrx" event={"ID":"f65257ab-42e6-4f77-ab65-f9f762c8ae42","Type":"ContainerStarted","Data":"72601cf56f002824175ecf09efe07e763f78bbb4a3525556dc1397d3e6a7cef6"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.208455 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" event={"ID":"814afa6a-716d-4011-89f9-6ccbc336e361","Type":"ContainerStarted","Data":"20579b9842ae5964bc446dd49eeed58edd6c727d5884cdd2adf2f376b145b284"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.217019 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerStarted","Data":"81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.218111 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.219750 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hgm9w" event={"ID":"4334e640-e3c2-4238-b7da-85e73bda80af","Type":"ContainerStarted","Data":"2ea139e003b79a475feee14a42275d9f7453eb9a9213db279e7f3471a5ff7868"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.219785 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hgm9w" event={"ID":"4334e640-e3c2-4238-b7da-85e73bda80af","Type":"ContainerStarted","Data":"6e063486fd7b148044d8682d6785ff2daefca61dffb3e8d16fef1c823e967646"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.223016 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerStarted","Data":"ff4310f7f3e5e9a2bd7e8ab4af2af7190f06dac0bc572790cf55ed3c145c3133"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.223072 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" 
event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerStarted","Data":"e523f03cf34fb250e7c923c4e51c4d22ee5c1f909f78da9e21089ceadcbd7bfc"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.225188 4979 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8pq8k container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.225247 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.245772 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.245936 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.745899128 +0000 UTC m=+144.707146171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.246209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.246659 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.746648999 +0000 UTC m=+144.707896022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.281749 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" event={"ID":"26cfd7ef-1024-479e-bdc5-e39429a16ee5","Type":"ContainerStarted","Data":"d77dcc7b8d4eda94c65546c40f447fa33c1a26c7778842c278f2dd62a625995a"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.287463 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.287538 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.282147 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" event={"ID":"26cfd7ef-1024-479e-bdc5-e39429a16ee5","Type":"ContainerStarted","Data":"e1843cd1c21062a77b4e66267aeb5c358c2f352021d86e7e0e02dc5a10c4056b"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.293266 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" event={"ID":"26cfd7ef-1024-479e-bdc5-e39429a16ee5","Type":"ContainerStarted","Data":"420f21b838d0534acddc58f9a6bf76f7eb31055abbdf16d4c574fa64c4292182"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.347223 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.349200 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.849181465 +0000 UTC m=+144.810428498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.356454 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" podStartSLOduration=124.356431447 podStartE2EDuration="2m4.356431447s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:28.340711641 +0000 UTC m=+144.301958674" watchObservedRunningTime="2026-01-30 21:42:28.356431447 +0000 UTC m=+144.317678480" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.449965 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.450542 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.95052541 +0000 UTC m=+144.911772443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.478214 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"] Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.551932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.552130 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.05209188 +0000 UTC m=+145.013338913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.552380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.552437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.552551 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.552970 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.052948054 +0000 UTC m=+145.014195087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.553701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.561269 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.565250 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.623308 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.623909 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.624613 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 21:37:27 +0000 UTC, rotation deadline is 2026-10-22 18:22:12.765122248 +0000 UTC Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.624671 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6356h39m44.14045278s for next certificate rotation Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.633201 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.633201 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.656173 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.656506 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.657450 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.657513 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.157476186 +0000 UTC m=+145.118723229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.764502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.773426 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.773948 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.273927088 +0000 UTC m=+145.235174121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.775761 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.875462 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.876139 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.376114915 +0000 UTC m=+145.337361948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.883365 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.977000 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.977497 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.47748029 +0000 UTC m=+145.438727323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.016251 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" podStartSLOduration=124.016223162 podStartE2EDuration="2m4.016223162s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.001240367 +0000 UTC m=+144.962487420" watchObservedRunningTime="2026-01-30 21:42:29.016223162 +0000 UTC m=+144.977470215"
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.078120 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.078472 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.578447553 +0000 UTC m=+145.539694596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.078559 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.079156 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.579144143 +0000 UTC m=+145.540391186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.179911 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.180418 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.680383974 +0000 UTC m=+145.641630997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.186860 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.187417 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.687396878 +0000 UTC m=+145.648643911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
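[editor's note] The cadence of the nestedpendingoperations.go:348 records can be read off the m=+ offsets (seconds on kubelet's monotonic clock): each failed attempt re-arms a 500 ms backoff window ("No retries permitted until" is the attempt time plus durationBeforeRetry), while the desired-state reconciler keeps re-queuing the operation every ~100-200 ms. A sketch that extracts the scheduled retry gates for this volume and their spacing, under the same local-kubelet.log assumption as above:

import re

# e.g. "No retries permitted until 2026-01-30 21:42:29.052948054 +0000 UTC m=+145.014195087 (durationBeforeRetry 500ms)"
PAT = re.compile(r"No retries permitted until .* m=\+(?P<mono>\d+\.\d+) \(durationBeforeRetry (?P<backoff>\w+)\)")

times = []
with open("kubelet.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        if "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" not in line:
            continue
        m = PAT.search(line)
        if m:
            times.append(float(m.group("mono")))

for prev, cur in zip(times, times[1:]):
    print(f"retry gate at m=+{cur:.3f}  (+{cur - prev:.3f}s after the previous one)")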
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.210772 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" podStartSLOduration=125.210749144 podStartE2EDuration="2m5.210749144s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.154484108 +0000 UTC m=+145.115731151" watchObservedRunningTime="2026-01-30 21:42:29.210749144 +0000 UTC m=+145.171996177"
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.295965 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.296464 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.796424864 +0000 UTC m=+145.757671897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.296598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.297048 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.797025381 +0000 UTC m=+145.758272414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.313389 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" event={"ID":"241b3d1c-56ec-4088-bcfa-bea0aecea050","Type":"ContainerStarted","Data":"f43255147e69e26f8a1fe665fac7bfec86a12ed20ef88126278fe472cc7b9de6"}
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.313481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" event={"ID":"241b3d1c-56ec-4088-bcfa-bea0aecea050","Type":"ContainerStarted","Data":"0d26f89e2a172e315969f36b15e192c68b244c9952df301c48d811d675ba11ba"}
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.314432 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hwb2t" podStartSLOduration=124.314407232 podStartE2EDuration="2m4.314407232s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.311743268 +0000 UTC m=+145.272990321" watchObservedRunningTime="2026-01-30 21:42:29.314407232 +0000 UTC m=+145.275654265"
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.316285 4979 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cxp2c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.316381 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" podUID="241b3d1c-56ec-4088-bcfa-bea0aecea050" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.316759 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.325310 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerStarted","Data":"72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727"}
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.325388 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerStarted","Data":"f9092fc40924a5c4c5ccda219effa1674a3cd66531deeb6ed63c03f809984b37"}
Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.370634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2zdrx" event={"ID":"f65257ab-42e6-4f77-ab65-f9f762c8ae42","Type":"ContainerStarted","Data":"5f161c0de437e434f21dee588b1079d8ddca7bea83a3a66cc1adaeb5a3bc615c"}
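[editor's note] The catalog-operator readiness failure above is the usual first-probe race rather than a second distinct problem: PLEG reports the containers started at 21:42:29.313, and the probe at .316 reaches https://10.217.0.40:8443/healthz before the operator has bound its listener, hence "connection refused" (the pod goes ready at 21:42:30.689 below). The kubelet prober is essentially issuing an HTTP GET without verifying the serving certificate; a rough stand-in for experimentation, assuming the endpoint is reachable from where you run it:

import ssl
import urllib.request

# Same endpoint the Readiness probe above targets.
url = "https://10.217.0.40:8443/healthz"
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # mirror the prober's skip-verify behavior for HTTPS probes

try:
    with urllib.request.urlopen(url, timeout=1, context=ctx) as resp:
        print("probe ok:", resp.status, resp.read(64))
except OSError as exc:  # connection refused, timeout, TLS failure, HTTP errors
    print("probe failed:", exc)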
pod="openshift-machine-config-operator/machine-config-server-2zdrx" event={"ID":"f65257ab-42e6-4f77-ab65-f9f762c8ae42","Type":"ContainerStarted","Data":"5f161c0de437e434f21dee588b1079d8ddca7bea83a3a66cc1adaeb5a3bc615c"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.405558 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.406407 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.906382267 +0000 UTC m=+145.867629300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.408211 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" event={"ID":"0f9f4663-eacb-4b8f-b468-a1ee9e078f99","Type":"ContainerStarted","Data":"599addbaf40f39e79f4307282b574e1f3829e30d728a7056008b55d63b9a9a52"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.408297 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" event={"ID":"0f9f4663-eacb-4b8f-b468-a1ee9e078f99","Type":"ContainerStarted","Data":"c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.508475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.516934 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.016907594 +0000 UTC m=+145.978154627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.603022 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.611501 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.612216 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.112187231 +0000 UTC m=+146.073434264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.612879 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:29 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:29 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:29 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.612992 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.626403 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2zdrx" podStartSLOduration=5.626371344 podStartE2EDuration="5.626371344s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.621849788 +0000 UTC m=+145.583096821" watchObservedRunningTime="2026-01-30 21:42:29.626371344 +0000 UTC m=+145.587618377" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.680518 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" podStartSLOduration=125.6804695 
podStartE2EDuration="2m5.6804695s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.672798628 +0000 UTC m=+145.634045661" watchObservedRunningTime="2026-01-30 21:42:29.6804695 +0000 UTC m=+145.641716523" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.713532 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.713915 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.213895175 +0000 UTC m=+146.175142198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.723835 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hgm9w" podStartSLOduration=124.723806439 podStartE2EDuration="2m4.723806439s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.720176449 +0000 UTC m=+145.681423482" watchObservedRunningTime="2026-01-30 21:42:29.723806439 +0000 UTC m=+145.685053482" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.753734 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" podStartSLOduration=125.753708067 podStartE2EDuration="2m5.753708067s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.751571388 +0000 UTC m=+145.712818421" watchObservedRunningTime="2026-01-30 21:42:29.753708067 +0000 UTC m=+145.714955110" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.782725 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" podStartSLOduration=124.782701119 podStartE2EDuration="2m4.782701119s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.781409233 +0000 UTC m=+145.742656276" watchObservedRunningTime="2026-01-30 21:42:29.782701119 +0000 UTC m=+145.743948152" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.814750 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.815357 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.315321912 +0000 UTC m=+146.276568945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.831002 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" podStartSLOduration=124.830981895 podStartE2EDuration="2m4.830981895s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.8297076 +0000 UTC m=+145.790954633" watchObservedRunningTime="2026-01-30 21:42:29.830981895 +0000 UTC m=+145.792228918" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.866545 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" podStartSLOduration=125.866528148 podStartE2EDuration="2m5.866528148s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.863994948 +0000 UTC m=+145.825241981" watchObservedRunningTime="2026-01-30 21:42:29.866528148 +0000 UTC m=+145.827775181" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.907324 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" podStartSLOduration=124.907300166 podStartE2EDuration="2m4.907300166s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.906153304 +0000 UTC m=+145.867400347" watchObservedRunningTime="2026-01-30 21:42:29.907300166 +0000 UTC m=+145.868547189" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.918000 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.918481 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.418468865 +0000 UTC m=+146.379715898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.940192 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" podStartSLOduration=124.940165816 podStartE2EDuration="2m4.940165816s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.939641531 +0000 UTC m=+145.900888554" watchObservedRunningTime="2026-01-30 21:42:29.940165816 +0000 UTC m=+145.901412849" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.957987 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"] Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.964297 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.002934 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" podStartSLOduration=125.002901992 podStartE2EDuration="2m5.002901992s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.990308143 +0000 UTC m=+145.951555176" watchObservedRunningTime="2026-01-30 21:42:30.002901992 +0000 UTC m=+145.964149025" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.019373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.019578 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.519545331 +0000 UTC m=+146.480792364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.019704 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.020827 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.520805056 +0000 UTC m=+146.482052089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.034024 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" podStartSLOduration=125.033993312 podStartE2EDuration="2m5.033993312s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:30.025422345 +0000 UTC m=+145.986669378" watchObservedRunningTime="2026-01-30 21:42:30.033993312 +0000 UTC m=+145.995240345" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.121813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.122092 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.622067729 +0000 UTC m=+146.583314762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.122431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.123507 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.623470678 +0000 UTC m=+146.584717711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.132179 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-464m7"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.202121 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.203683 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.219727 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.222661 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.224136 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.224303 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.724277757 +0000 UTC m=+146.685524790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.224388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.224735 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.724728519 +0000 UTC m=+146.685975552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.242118 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjfp6"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.249053 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tbr4j"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.254342 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.314291 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.320492 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"] Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.323068 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6952a3c6_a471_489c_ba9a_9e4b5e9ac362.slice/crio-359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba WatchSource:0}: Error finding container 359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba: Status 404 returned error can't find the container with id 359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.325663 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.326154 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.826134834 +0000 UTC m=+146.787381867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.356505 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.356978 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.369735 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.369784 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.370646 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"] Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.378493 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7334e56_32c0_40f4_b60d_afab26024b6a.slice/crio-b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9 WatchSource:0}: Error finding container b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9: Status 404 returned error can't find the container with id b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9 Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.382778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.392398 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.402635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.419296 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.421240 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-969ns"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.427543 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.428016 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.927995623 +0000 UTC m=+146.889242656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.429664 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f0c12f1_c780_4020_921b_11e410503db3.slice/crio-0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a WatchSource:0}: Error finding container 0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a: Status 404 returned error can't find the container with id 0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.438558 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.438966 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.469770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" event={"ID":"7ad194c8-35db-4a68-9c59-575a8971d714","Type":"ContainerStarted","Data":"5ef9ea1a8b1714ad37c66c921f63b0c32a57051cb23f65b630ba25107f1ba693"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.491823 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.491906 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" event={"ID":"7638c8d5-0616-4612-9d15-7594e4f74184","Type":"ContainerStarted","Data":"fb302f7dcc4d9c0fce298ad934f5dba2ebf56dbb724e75125c2c1b0501f98e6b"} Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.504890 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod531bdeb2_b55c_4a3b_8fb5_1dca8478c479.slice/crio-95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f WatchSource:0}: Error finding container 95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f: Status 404 returned error can't find the container with id 95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.512692 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" event={"ID":"2063d8fc-0614-40e7-be84-ebfbda9acd89","Type":"ContainerStarted","Data":"39a7011e8f66e3083fbba9c04e7ba4c433ae4f32a904ca396f8fb210e2373cda"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.528533 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"f3945d4121246c159397b7d4ada9093e0f33963deed010e1411b121d07437a1c"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.529318 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.529740 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.029722098 +0000 UTC m=+146.990969131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.535806 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" event={"ID":"6ebf43de-28a1-4cb6-a008-7bcc970b96ac","Type":"ContainerStarted","Data":"bdd825199390501468faf02b4ae1c5e76e7a754a355a385686ae77097aa84e4f"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.540094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" event={"ID":"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564","Type":"ContainerStarted","Data":"f7169b345cdc05e29122d7abb87bd972d6e49b98973dcda6ae1ec411ac695143"} Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.551419 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf702c9e_2d17_476e_9bbe_d41784bf809b.slice/crio-285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7 WatchSource:0}: Error finding container 285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7: Status 404 returned error can't find the container with id 285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7 Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.553134 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lbd69"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.569446 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" 
event={"ID":"38abc107-38ba-4e77-b00f-eece6eb28537","Type":"ContainerStarted","Data":"e6ea3e6c09d3bc197d04de59fd80b4e6ca76b1c9554cecc917347193af1dfdbf"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.574634 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:30 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:30 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:30 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.574726 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.576550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" event={"ID":"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f","Type":"ContainerStarted","Data":"cd1146b8dfdad63d84be6913eeb3b6510467eb0c1ad861abd25f98ff51cc56fb"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.576605 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" event={"ID":"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f","Type":"ContainerStarted","Data":"03021ea95fef2aa2eca6cd5af517b1bd721a3b5fc1066c71f3b5ddcabfe8773a"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.579082 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" event={"ID":"e7334e56-32c0-40f4-b60d-afab26024b6a","Type":"ContainerStarted","Data":"b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.581014 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-464m7" event={"ID":"ebc2a677-6e7a-41ce-a3f4-063acddaa66b","Type":"ContainerStarted","Data":"072830c82b46453c7855f50c6e6f087a9cac16cd2584213d91cbbdf0bf5325a7"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.587341 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.588092 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.598105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" event={"ID":"dda3a423-1b53-4e85-9ef1-123fe54ceb98","Type":"ContainerStarted","Data":"1b255e12b1ce330be94703262e51b96d275cbbe7182502b8b511460d87f0dbac"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.598182 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" event={"ID":"dda3a423-1b53-4e85-9ef1-123fe54ceb98","Type":"ContainerStarted","Data":"bf8643b5b61a4e7762bd6e45baf2c2788e0ee194d555a6f7af296867b9d36f21"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.601137 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" event={"ID":"6952a3c6-a471-489c-ba9a-9e4b5e9ac362","Type":"ContainerStarted","Data":"359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.607784 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-trsfj"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.622868 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.637485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.637958 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.137938372 +0000 UTC m=+147.099185405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.648790 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l44fm"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.689908 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.739221 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.741263 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.24123072 +0000 UTC m=+147.202477753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.842061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.842962 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.342948845 +0000 UTC m=+147.304195878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.886769 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45cde1ce_04ec_4fdd_bfc0_10d072a9eff1.slice/crio-d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f WatchSource:0}: Error finding container d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f: Status 404 returned error can't find the container with id d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.943211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.943530 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.443488885 +0000 UTC m=+147.404735918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.943720 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.944112 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.444095583 +0000 UTC m=+147.405342616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.046692 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.047314 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.547289668 +0000 UTC m=+147.508536701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.095334 4979 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tdvvn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]log ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]etcd ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/max-in-flight-filter ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 21:42:31 crc kubenswrapper[4979]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 21:42:31 crc kubenswrapper[4979]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-startinformers ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 21:42:31 crc kubenswrapper[4979]: livez check failed Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.095440 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" podUID="daf9c301-ff6e-47d9-a8a0-d88e6cf53d48" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.149528 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.150205 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.650167774 +0000 UTC m=+147.611414807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.251528 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.251738 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.751707564 +0000 UTC m=+147.712954597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.251811 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.252542 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.752534716 +0000 UTC m=+147.713781749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.353690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.355147 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.854337134 +0000 UTC m=+147.815584167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.455844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.456668 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.956640134 +0000 UTC m=+147.917887227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.560774 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.561298 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.061271769 +0000 UTC m=+148.022518802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.561444 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.562073 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.06206025 +0000 UTC m=+148.023307283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.567090 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:31 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:31 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:31 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.567301 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.629972 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-464m7" event={"ID":"ebc2a677-6e7a-41ce-a3f4-063acddaa66b","Type":"ContainerStarted","Data":"b3c026fa420e4606ac34e547a6b7a85a7e573cef9bcf034ca99faf6e1d1f8690"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.634995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lbd69" event={"ID":"7a7b036f-4e32-47e9-b700-da7ef3615e4f","Type":"ContainerStarted","Data":"bd0ce3b9147a1ce3bbccee527472379520f0e1932c76c405f3cb2ccafdfe4f23"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.638282 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" event={"ID":"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f","Type":"ContainerStarted","Data":"9704274a3f33f8474fb59cb4da6e1481581b742d3433473db1ff59652cc6bad4"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.640213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerStarted","Data":"0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.640260 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerStarted","Data":"2fdea5ec5c945a9b137321bd0204027de83c52d16c6cd7e9cca2d07e312e0fe5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.641026 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" event={"ID":"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d","Type":"ContainerStarted","Data":"e21e27f1657aa3412b2bbaae9b4d978e7dfbbe471a5bcfabcfd9847fa5154869"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.647413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" 
event={"ID":"ed73bac2-f781-4475-b265-8c8820d10e3b","Type":"ContainerStarted","Data":"cfbfda9a20adf2a5b922c67c6eee61e1ebeffd08961856293dc8c03148aa86f5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.649343 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerStarted","Data":"585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.649378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerStarted","Data":"77916c27a3bed0009808e06c73482e7ba563d922fb5c460a56269b992ef94952"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.650377 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.652644 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" event={"ID":"6952a3c6-a471-489c-ba9a-9e4b5e9ac362","Type":"ContainerStarted","Data":"2f767921d1a6fcfe8ec614441edeb634f4060c8dd04fa012ee77b523d248b6de"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.653374 4979 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lzp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.653380 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.653418 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.654628 4979 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6285m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.654660 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" podUID="6952a3c6-a471-489c-ba9a-9e4b5e9ac362" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.661597 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" podStartSLOduration=126.661575874 podStartE2EDuration="2m6.661575874s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
21:42:31.658715125 +0000 UTC m=+147.619962158" watchObservedRunningTime="2026-01-30 21:42:31.661575874 +0000 UTC m=+147.622822907" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.663442 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" event={"ID":"6ebf43de-28a1-4cb6-a008-7bcc970b96ac","Type":"ContainerStarted","Data":"c7062503aa0d42950ff3ebc012cb84f3dee665b71c85823f80ea9ee149341f67"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.664435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.665101 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.165082051 +0000 UTC m=+148.126329094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.675918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" event={"ID":"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564","Type":"ContainerStarted","Data":"cc423f640a6c1f728bdf896e80aa1e69eac802aa858ff5796ad035df2aaf7dc5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.677253 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.685152 4979 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j5jdh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.685214 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" podUID="f1ebd25b-fae4-4659-ab8c-e57b0e9d9564" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.696899 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" event={"ID":"d768fc5d-52c2-4901-a7cd-759d26f88251","Type":"ContainerStarted","Data":"e2e02e31f3aabd3d8a1cf93131c32a8e0598193e429437391912bab37c40db11"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.708893 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" podStartSLOduration=126.708873763 podStartE2EDuration="2m6.708873763s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.707461364 +0000 UTC m=+147.668708397" watchObservedRunningTime="2026-01-30 21:42:31.708873763 +0000 UTC m=+147.670120796" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.709016 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podStartSLOduration=126.709009676 podStartE2EDuration="2m6.709009676s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.68490807 +0000 UTC m=+147.646155103" watchObservedRunningTime="2026-01-30 21:42:31.709009676 +0000 UTC m=+147.670256699" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.711499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" event={"ID":"66910c2a-724c-42a8-8511-a8ee6de7d140","Type":"ContainerStarted","Data":"2187665181f7367677f2c1b881a03ee8da637754087e54f885ca01e2dc936f43"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.716686 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" event={"ID":"5ec159e5-6cc8-4130-a83c-ad402c63e175","Type":"ContainerStarted","Data":"fb7e4a3ec1ad847ba658d47ac8876c6f93045c61520a643b940a25449f568fab"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.718339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" event={"ID":"df702c9e-2d17-476e-9bbe-d41784bf809b","Type":"ContainerStarted","Data":"285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.720005 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" event={"ID":"4f0c12f1-c780-4020-921b-11e410503db3","Type":"ContainerStarted","Data":"6e1e9e6deb3a154c5b70c3d0fc41ce67d8193ceed8421a0bd23df8f6bbefcf82"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.720054 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" event={"ID":"4f0c12f1-c780-4020-921b-11e410503db3","Type":"ContainerStarted","Data":"0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.721278 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" event={"ID":"531bdeb2-b55c-4a3b-8fb5-1dca8478c479","Type":"ContainerStarted","Data":"95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.722302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerStarted","Data":"964c8b1ba5415a6ffab5411d004a571cd2b1dc55669379c6f25606fce00667e5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.723153 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l44fm" event={"ID":"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1","Type":"ContainerStarted","Data":"d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.726099 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" event={"ID":"dda3a423-1b53-4e85-9ef1-123fe54ceb98","Type":"ContainerStarted","Data":"b9a563be9831c29811a1e48898f52a6678ef7612f03e41eb2ca6c66ea2fba85a"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.731456 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" podStartSLOduration=126.731432867 podStartE2EDuration="2m6.731432867s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.727172089 +0000 UTC m=+147.688419122" watchObservedRunningTime="2026-01-30 21:42:31.731432867 +0000 UTC m=+147.692679900" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.733743 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" event={"ID":"7ad194c8-35db-4a68-9c59-575a8971d714","Type":"ContainerStarted","Data":"46f64f4bbb3ee52e01fe4fc6d1e4c9b080bec1744acc2f526ac779eea222e447"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.737270 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" event={"ID":"7638c8d5-0616-4612-9d15-7594e4f74184","Type":"ContainerStarted","Data":"b7fba9ff02b4535cb5aa87018eb11f5295e464a8f326227ae83662f7e182723e"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.739366 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" event={"ID":"38abc107-38ba-4e77-b00f-eece6eb28537","Type":"ContainerStarted","Data":"eb0fadc1ba1644ce574ed94626a635db9b6003b8cf16bb3ff670e8f86fc0cd06"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.747881 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" podStartSLOduration=126.747858541 podStartE2EDuration="2m6.747858541s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.747355778 +0000 UTC m=+147.708602811" watchObservedRunningTime="2026-01-30 21:42:31.747858541 +0000 UTC m=+147.709105574" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.766794 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.768384 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:32.268366059 +0000 UTC m=+148.229613092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.769640 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" podStartSLOduration=126.769578842 podStartE2EDuration="2m6.769578842s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.768300037 +0000 UTC m=+147.729547080" watchObservedRunningTime="2026-01-30 21:42:31.769578842 +0000 UTC m=+147.730825885" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.793783 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" podStartSLOduration=126.793760342 podStartE2EDuration="2m6.793760342s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.792227829 +0000 UTC m=+147.753474862" watchObservedRunningTime="2026-01-30 21:42:31.793760342 +0000 UTC m=+147.755007375" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.814634 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" podStartSLOduration=126.814613299 podStartE2EDuration="2m6.814613299s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.814545476 +0000 UTC m=+147.775792509" watchObservedRunningTime="2026-01-30 21:42:31.814613299 +0000 UTC m=+147.775860332" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.834626 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" podStartSLOduration=126.834576531 podStartE2EDuration="2m6.834576531s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.833911312 +0000 UTC m=+147.795158345" watchObservedRunningTime="2026-01-30 21:42:31.834576531 +0000 UTC m=+147.795823564" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.868695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.870809 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.370768272 +0000 UTC m=+148.332015535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.971076 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.971485 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.471468129 +0000 UTC m=+148.432715162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.039948 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.040253 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.072643 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.072911 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.572865054 +0000 UTC m=+148.534112097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.073908 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.074388 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.574368065 +0000 UTC m=+148.535615098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.176950 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.178474 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.179399 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.179630 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.679593116 +0000 UTC m=+148.640840149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.179816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.180307 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.680297196 +0000 UTC m=+148.641544229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.182681 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.193446 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.282165 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.283193 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.283363 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.283632 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.285356 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.785299222 +0000 UTC m=+148.746546255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.361097 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.362319 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.364990 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.377508 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.397849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.398549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.398580 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.398609 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.398686 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.898662708 +0000 UTC m=+148.859909741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.399362 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.399626 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.429154 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506463 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.506670 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.006633675 +0000 UTC m=+148.967880718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506807 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506834 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.507279 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.007266903 +0000 UTC m=+148.968513936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.570741 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.572464 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.573417 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:32 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:32 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:32 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.573460 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.575211 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.591639 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.609799 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.611616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.611742 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.611940 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.612639 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.612763 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:33.112741811 +0000 UTC m=+149.073988854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.613095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.652229 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.699130 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715677 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715757 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715950 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.716504 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:33.216485062 +0000 UTC m=+149.177732175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.762708 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" event={"ID":"66910c2a-724c-42a8-8511-a8ee6de7d140","Type":"ContainerStarted","Data":"f456ac2434eebd53eebc53e96333fc8771412d72ac2190c266bcbcce812eddf3"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.782517 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.785689 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.789734 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.793388 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" event={"ID":"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d","Type":"ContainerStarted","Data":"61fc6a0f3fda396062e59232a12907a401403d66450e2ec645447a2e479e0077"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.794409 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" podStartSLOduration=127.794386207 podStartE2EDuration="2m7.794386207s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.793250775 +0000 UTC m=+148.754497808" watchObservedRunningTime="2026-01-30 21:42:32.794386207 +0000 UTC m=+148.755633240" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.817426 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.818260 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.318227646 +0000 UTC m=+149.279474689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830177 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830361 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830507 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830576 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.819139 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" event={"ID":"4f0c12f1-c780-4020-921b-11e410503db3","Type":"ContainerStarted","Data":"e42646c812528d15f2a790d8d81db7668ecda68e0f00345061af9f7e816e05ed"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.831589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.831894 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.331883094 +0000 UTC m=+149.293130127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.832341 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.868463 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lbd69" event={"ID":"7a7b036f-4e32-47e9-b700-da7ef3615e4f","Type":"ContainerStarted","Data":"214b5add41548eca3427a4f06d3aa6644796d2a8a07422af3e97536ab39cff51"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.879241 4979 generic.go:334] "Generic (PLEG): container finished" podID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerID="72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727" exitCode=0 Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.879374 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerDied","Data":"72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.908930 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" event={"ID":"e7334e56-32c0-40f4-b60d-afab26024b6a","Type":"ContainerStarted","Data":"c2d2e24f3144b21e04d2e032631ac585cd9982d28b6ed4f4ae367e274993d023"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.921124 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.929391 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" podStartSLOduration=127.929368662 podStartE2EDuration="2m7.929368662s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.894632881 +0000 UTC m=+148.855879914" watchObservedRunningTime="2026-01-30 21:42:32.929368662 +0000 UTC m=+148.890615695" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.931675 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" podStartSLOduration=127.931656795 podStartE2EDuration="2m7.931656795s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.927589422 +0000 UTC m=+148.888836455" watchObservedRunningTime="2026-01-30 21:42:32.931656795 +0000 UTC m=+148.892903828" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.932122 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerStarted","Data":"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.933748 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935590 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935890 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935918 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.936114 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.936270 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:33.436248082 +0000 UTC m=+149.397495115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.942885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l44fm" event={"ID":"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1","Type":"ContainerStarted","Data":"13a51e61149fc5f98737cac1ce5720f99a121ed742b47872bf041837740284fa"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.944222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.944633 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.952551 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.953440 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" event={"ID":"df702c9e-2d17-476e-9bbe-d41784bf809b","Type":"ContainerStarted","Data":"18b5e546aa644f863cdb950a2d9deb6a5a67659887d3244885a8dc6589b88ed4"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.966342 4979 patch_prober.go:28] interesting pod/console-operator-58897d9998-l44fm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.966439 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l44fm" podUID="45cde1ce-04ec-4fdd-bfc0-10d072a9eff1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.967062 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" event={"ID":"531bdeb2-b55c-4a3b-8fb5-1dca8478c479","Type":"ContainerStarted","Data":"b376144d1e0d4b970f24cae31a0716b254f7d2db6ac1cd2a3feb27398072c429"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.979350 4979 generic.go:334] "Generic (PLEG): container finished" 
podID="d768fc5d-52c2-4901-a7cd-759d26f88251" containerID="7556cbd99f95ccceb7bf0d0fac7c1b3772888d4d04c3a35219fe9953959db2aa" exitCode=0 Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.979428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" event={"ID":"d768fc5d-52c2-4901-a7cd-759d26f88251","Type":"ContainerDied","Data":"7556cbd99f95ccceb7bf0d0fac7c1b3772888d4d04c3a35219fe9953959db2aa"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.988492 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" podStartSLOduration=127.988462347 podStartE2EDuration="2m7.988462347s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.987370286 +0000 UTC m=+148.948617339" watchObservedRunningTime="2026-01-30 21:42:32.988462347 +0000 UTC m=+148.949709380" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.006164 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lbd69" podStartSLOduration=9.006135116 podStartE2EDuration="9.006135116s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.005691553 +0000 UTC m=+148.966938596" watchObservedRunningTime="2026-01-30 21:42:33.006135116 +0000 UTC m=+148.967382149" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053685 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053733 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.054620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" event={"ID":"2063d8fc-0614-40e7-be84-ebfbda9acd89","Type":"ContainerStarted","Data":"323a814f514a92a3735576883357a300caae64e4a2f5b2949c141b5734b46ee2"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.061111 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.064263 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.064723 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.070924 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.570902978 +0000 UTC m=+149.532150011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.074915 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" event={"ID":"5ec159e5-6cc8-4130-a83c-ad402c63e175","Type":"ContainerStarted","Data":"35565181a174192acb9291137cecf5a45f764a491611e072d2e49017c51cf2de"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.074980 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" event={"ID":"5ec159e5-6cc8-4130-a83c-ad402c63e175","Type":"ContainerStarted","Data":"96f32a15d80a8ac744380345ad4913ecb6cb35036100651001af43125a344cb9"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.084449 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l44fm" podStartSLOduration=128.084415981 podStartE2EDuration="2m8.084415981s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.075720511 +0000 UTC m=+149.036967554" watchObservedRunningTime="2026-01-30 21:42:33.084415981 +0000 UTC m=+149.045663014" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.091549 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.093794 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.100217 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.108934 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.109009 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" event={"ID":"ed73bac2-f781-4475-b265-8c8820d10e3b","Type":"ContainerStarted","Data":"5f287c47abbe3b8b7dfd6f717716db44a0acfa82ad6eb5ab897e1a29ea1257c9"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.113462 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.143718 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-464m7" event={"ID":"ebc2a677-6e7a-41ce-a3f4-063acddaa66b","Type":"ContainerStarted","Data":"6cf9b18344b975c59991ebbe8d277764baa5dcef570d47a0584cd92ec3e70129"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.146978 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.147961 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148004 4979 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6285m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148059 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" podUID="6952a3c6-a471-489c-ba9a-9e4b5e9ac362" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148109 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148462 4979 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lzp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148557 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.154876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.155812 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.655788716 +0000 UTC m=+149.617035749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.184851 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.184929 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.185233 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.185185 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-h6sv5" podStartSLOduration=128.185158309 podStartE2EDuration="2m8.185158309s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.163048168 +0000 UTC m=+149.124295201" watchObservedRunningTime="2026-01-30 21:42:33.185158309 +0000 UTC m=+149.146405342" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.231716 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" podStartSLOduration=128.231686497 podStartE2EDuration="2m8.231686497s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.216925048 +0000 UTC m=+149.178172081" watchObservedRunningTime="2026-01-30 21:42:33.231686497 +0000 UTC m=+149.192933530" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.258618 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.262997 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.762975852 +0000 UTC m=+149.724222885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: W0130 21:42:33.285491 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ceea51c_f0b8_4de3_be53_f1d857b3a1b8.slice/crio-295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20 WatchSource:0}: Error finding container 295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20: Status 404 returned error can't find the container with id 295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20 Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.285854 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.330944 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" podStartSLOduration=128.330915441 podStartE2EDuration="2m8.330915441s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.279083897 +0000 UTC m=+149.240330930" watchObservedRunningTime="2026-01-30 21:42:33.330915441 +0000 UTC m=+149.292162484" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.357647 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podStartSLOduration=128.357614541 podStartE2EDuration="2m8.357614541s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.328215928 +0000 UTC m=+149.289462991" watchObservedRunningTime="2026-01-30 21:42:33.357614541 +0000 UTC m=+149.318861574" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.360150 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.360764 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.860738967 +0000 UTC m=+149.821986000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.410150 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" podStartSLOduration=128.410125403 podStartE2EDuration="2m8.410125403s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.372636306 +0000 UTC m=+149.333883369" watchObservedRunningTime="2026-01-30 21:42:33.410125403 +0000 UTC m=+149.371372436" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.423595 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.456828 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" podStartSLOduration=128.456799134 podStartE2EDuration="2m8.456799134s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.438388115 +0000 UTC m=+149.399635178" watchObservedRunningTime="2026-01-30 21:42:33.456799134 +0000 UTC m=+149.418046167" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.461573 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.462150 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.962130242 +0000 UTC m=+149.923377275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.490072 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-464m7" podStartSLOduration=9.490048255 podStartE2EDuration="9.490048255s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.485562391 +0000 UTC m=+149.446809444" watchObservedRunningTime="2026-01-30 21:42:33.490048255 +0000 UTC m=+149.451295288" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.566260 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.566769 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.066746177 +0000 UTC m=+150.027993210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.573181 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" podStartSLOduration=128.573128823 podStartE2EDuration="2m8.573128823s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.558611222 +0000 UTC m=+149.519858265" watchObservedRunningTime="2026-01-30 21:42:33.573128823 +0000 UTC m=+149.534375876" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.652184 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.668400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.668822 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.168805111 +0000 UTC m=+150.130052144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.726113 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:33 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:33 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:33 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.726176 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.772814 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.773367 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.273331532 +0000 UTC m=+150.234578565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.887959 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.891628 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.391594775 +0000 UTC m=+150.352841808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.994047 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.994569 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.494534042 +0000 UTC m=+150.455781075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.994840 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.995275 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.495253533 +0000 UTC m=+150.456500586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.095968 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"]
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.096890 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.097303 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.597287226 +0000 UTC m=+150.558534259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.199153 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" event={"ID":"531bdeb2-b55c-4a3b-8fb5-1dca8478c479","Type":"ContainerStarted","Data":"fd0f76e38ac5b45b32d6f319fc6ab94eb270980e71cc410c3327587638df7123"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.210416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.210831 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.710815487 +0000 UTC m=+150.672062520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.227510 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" event={"ID":"d768fc5d-52c2-4901-a7cd-759d26f88251","Type":"ContainerStarted","Data":"93fb7b341146c630d112b7a2050a9dcbc16bec742e82a3ba83c50f275ec23952"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.228201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.233312 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9ca1d9085d56c9080a077e82122e2f38cba69a3090232fc00cc7acb0b68a10c4"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.259428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerStarted","Data":"295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.268149 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" podStartSLOduration=129.268125463 podStartE2EDuration="2m9.268125463s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:34.265832309 +0000 UTC m=+150.227079342" watchObservedRunningTime="2026-01-30 21:42:34.268125463 +0000 UTC m=+150.229372496"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.279413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" event={"ID":"e7334e56-32c0-40f4-b60d-afab26024b6a","Type":"ContainerStarted","Data":"cb7d83055f15431e65b94ffbcef8b7f017093ccde5577adad7fa2c1ba83772fb"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.321916 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"1b3e92d4f597c50514a726bce1f0da466e1e891ed10de47ef04906e7508ae0f8"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.326127 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.326217 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.826197919 +0000 UTC m=+150.787444952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.344705 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.346810 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.846777939 +0000 UTC m=+150.808024972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.374678 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerStarted","Data":"9e701107804895c162dc5dbfb55c5fb4850bb1995cf07bbee85bb8f8a3ce5a6f"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.396473 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerStarted","Data":"ef80ed7d6ea466150a57b7d4595c84c46d03f43e54dcb40334059a4c99c74be3"}
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.398663 4979 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lzp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.398740 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.405022 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.405138 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.406222 4979 patch_prober.go:28] interesting pod/console-operator-58897d9998-l44fm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.406528 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l44fm" podUID="45cde1ce-04ec-4fdd-bfc0-10d072a9eff1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.426750 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.447338 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.449558 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.949510551 +0000 UTC m=+150.910757584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.527958 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"]
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.529730 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.536154 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.550004 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.563277 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"]
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.567171 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.067149406 +0000 UTC m=+151.028396439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.573131 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-454jj"]
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.574301 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:42:34 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld
Jan 30 21:42:34 crc kubenswrapper[4979]: [+]process-running ok
Jan 30 21:42:34 crc kubenswrapper[4979]: healthz check failed
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.574521 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.623160 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.662754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.663193 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.163165393 +0000 UTC m=+151.124412426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.663244 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.663468 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.663503 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.764899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.764981 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.765008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.765067 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.765539 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.265517974 +0000 UTC m=+151.226765007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.766418 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.766658 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.786594 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"]
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.787960 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.836465 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.855929 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"]
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.867080 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.867460 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.367438525 +0000 UTC m=+151.328685558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.871126 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972222 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972290 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972327 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.972736 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.472717967 +0000 UTC m=+151.433965000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.073382 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.073508 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.573479045 +0000 UTC m=+151.534726078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074014 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074110 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.074402 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.574386301 +0000 UTC m=+151.535633334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.075290 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.075355 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.121365 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.153448 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.177377 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.177834 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.677808642 +0000 UTC m=+151.639055675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.227218 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.283654 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.284548 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.784533495 +0000 UTC m=+151.745780528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386745 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386831 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.386975 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.886937938 +0000 UTC m=+151.848184971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.387338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.387913 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.887892205 +0000 UTC m=+151.849139238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.389575 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.395839 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume" (OuterVolumeSpecName: "config-volume") pod "b43f94f0-791b-49cc-afe0-95ec18aa1f07" (UID: "b43f94f0-791b-49cc-afe0-95ec18aa1f07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.405522 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.408778 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"]
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.409660 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerName="collect-profiles"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.409746 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerName="collect-profiles"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.410487 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerName="collect-profiles"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.414013 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.430258 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr" (OuterVolumeSpecName: "kube-api-access-2b7tr") pod "b43f94f0-791b-49cc-afe0-95ec18aa1f07" (UID: "b43f94f0-791b-49cc-afe0-95ec18aa1f07"). InnerVolumeSpecName "kube-api-access-2b7tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.435673 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.439107 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"]
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.442468 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b43f94f0-791b-49cc-afe0-95ec18aa1f07" (UID: "b43f94f0-791b-49cc-afe0-95ec18aa1f07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.451567 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"]
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.457792 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.457866 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.458267 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.458445 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.493355 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.494245 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.494352 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.494437 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.494636 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.994613208 +0000 UTC m=+151.955860241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.499152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69a009f35ded371aabf5b7792a76efbd01c8b7cef3c7f0785e7abc9f88921676"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.499282 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"126421ffab26a0306583a4d4b26dc1a88feae26774f51960306aee1d9d068837"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.500854 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.517361 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerDied","Data":"f9092fc40924a5c4c5ccda219effa1674a3cd66531deeb6ed63c03f809984b37"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.517413 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9092fc40924a5c4c5ccda219effa1674a3cd66531deeb6ed63c03f809984b37"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.517569 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.548403 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"31bfef1cf6782c630454f26fc196708d03ea5fd5e3bc34fe717e150e46e5924b"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.558811 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1" exitCode=0
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.558953 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.577348 4979 generic.go:334] "Generic (PLEG): container finished" podID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" exitCode=0
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.577466 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.578558 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:42:35 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld
Jan 30 21:42:35 crc kubenswrapper[4979]: [+]process-running ok
Jan 30 21:42:35 crc kubenswrapper[4979]: healthz check failed
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.578595 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.587441 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"]
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.588818 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603160 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603250 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603387 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603441 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.604507 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.104479947 +0000 UTC m=+152.065727180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.605456 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerID="ac193c08f8b37b1caaa0e8f2fd6642d2080bfcadd0f1988fbb608a5fad551f06" exitCode=0
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.605586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"ac193c08f8b37b1caaa0e8f2fd6642d2080bfcadd0f1988fbb608a5fad551f06"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.657196 4979 generic.go:334] "Generic (PLEG): container finished" podID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerID="bf235c47905ef6c38fcc7f3601d64c6f0ba215a6796ab2b1da97239f211b40de" exitCode=0
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.657276 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"bf235c47905ef6c38fcc7f3601d64c6f0ba215a6796ab2b1da97239f211b40de"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.657306 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerStarted","Data":"897e930b920945770fe85e65189da3f41f538afe25ecb7f6857d9256eed7d54a"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.670788 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"]
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.672411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"68ecc18b7726954e2cba0cf9dcf99d9f08243cc72271afa623e628583a74076f"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.672463 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"766900bad4d97d184ca34531072e3c7ee0435fdc17acce2b96276917ce8decc7"}
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.704690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705368 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705477 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705531 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705572 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705605 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.706369 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.206329845 +0000 UTC m=+152.167576888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.706873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.706947 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.780557 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810118 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810178 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810292 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810317 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.810692 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.310675083 +0000 UTC m=+152.271922116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.814316 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.814538 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.818098 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.897501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.917570 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.918099 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.418061883 +0000 UTC m=+152.379308916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.992619 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.019521 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.019971 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.519949702 +0000 UTC m=+152.481196735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.129817 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.129918 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.629890395 +0000 UTC m=+152.591137428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.141199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.141693 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.641677351 +0000 UTC m=+152.602924384 (durationBeforeRetry 500ms).
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.179337 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l44fm"
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.242887 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.243417 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.743392465 +0000 UTC m=+152.704639498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.292336 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"]
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.344859 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.345277 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.845263493 +0000 UTC m=+152.806510526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.446610 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.446880 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.946833844 +0000 UTC m=+152.908080887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.447462 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.447856 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.947840992 +0000 UTC m=+152.909088025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.548341 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.548473 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.048452706 +0000 UTC m=+153.009699739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.548769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.549087 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.049080523 +0000 UTC m=+153.010327556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.577201 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:42:36 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld
Jan 30 21:42:36 crc kubenswrapper[4979]: [+]process-running ok
Jan 30 21:42:36 crc kubenswrapper[4979]: healthz check failed
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.577270 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.650051 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.650539 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.150516339 +0000 UTC m=+153.111763372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.704788 4979 generic.go:334] "Generic (PLEG): container finished" podID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerID="79a85f996439ff844121a3f1030805086e2c3395fd9f9a97d7660f7b7319ecdd" exitCode=0
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.705315 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"79a85f996439ff844121a3f1030805086e2c3395fd9f9a97d7660f7b7319ecdd"}
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.705355 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerStarted","Data":"69b34253c166acfc981a0414523d053e63aae7c6e06110f5fe68cf8028008964"}
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.729492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerStarted","Data":"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065"}
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.729541 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerStarted","Data":"4c62920e03a89d4d5765a230e2b55c002afe184d080ace3bcaa5b06f8f97c1f4"}
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.757514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.757907 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.25789316 +0000 UTC m=+153.219140193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.858893 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.859655 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.359633675 +0000 UTC m=+153.320880708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.859985 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.872135 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.372120291 +0000 UTC m=+153.333367324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.890888 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.923923 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"]
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.945998 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"]
Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.962201 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.962736 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.462717407 +0000 UTC m=+153.423964440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.063781 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.064178 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.564164824 +0000 UTC m=+153.525411857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.165166 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.165752 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.665732094 +0000 UTC m=+153.626979127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.266761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.267165 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.767148801 +0000 UTC m=+153.728395834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.368133 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.368268 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.868245268 +0000 UTC m=+153.829492301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.368850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.369247 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.869237135 +0000 UTC m=+153.830484168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.470253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.470693 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.970667612 +0000 UTC m=+153.931914645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.563013 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.569251 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.572298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.572771 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.072752206 +0000 UTC m=+154.033999239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.652829 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.653708 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.675105 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.677324 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.177278779 +0000 UTC m=+154.138525812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.682877 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.683214 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.694567 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.779600 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.779727 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.779760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.780263 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.280239067 +0000 UTC m=+154.241486090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.784369 4979 generic.go:334] "Generic (PLEG): container finished" podID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" exitCode=0
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.784539 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065"}
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.807469 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"ef234ccd91820f9c9ec287127934f83d0c0b7196cf7358d463dd2dca8996f477"}
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.810939 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerStarted","Data":"87982f21eeaee850aff8e29886551952617d82411b159837b48e46f7e706dfb9"}
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.837720 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerStarted","Data":"2225585b885540daf5c8798c55ba2f9f3246f245430840cea94336a10b265b9b"}
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.847922 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.888171 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.888391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
\"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.888417 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.888910 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.388890533 +0000 UTC m=+154.350137566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.890240 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.917408 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.989530 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.990045 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.490007351 +0000 UTC m=+154.451254384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.001629 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.057241 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.097629 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.097971 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.597944477 +0000 UTC m=+154.559191510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.098024 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.098473 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.598463822 +0000 UTC m=+154.559710855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.201684 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.201919 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.701876753 +0000 UTC m=+154.663123796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.201972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.202497 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.70247775 +0000 UTC m=+154.663724783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.304138 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.304945 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.804922104 +0000 UTC m=+154.766169137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.408231 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.408710 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.908685625 +0000 UTC m=+154.869932658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.511158 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.511700 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.011649764 +0000 UTC m=+154.972896797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.561783 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.614415 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.614793 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.114779797 +0000 UTC m=+155.076026830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.662436 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.715211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.716618 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.216600945 +0000 UTC m=+155.177847978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.819302 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.819691 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.319677627 +0000 UTC m=+155.280924660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.892465 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.892523 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.903695 4979 patch_prober.go:28] interesting pod/console-f9d7485db-h6sv5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.904714 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h6sv5" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.914538 4979 generic.go:334] "Generic (PLEG): container finished" podID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c" exitCode=0
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.914669 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"}
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.921249 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.921745 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.42171747 +0000 UTC m=+155.382964503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.921848 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.926997 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.426981685 +0000 UTC m=+155.388228718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.998490 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerStarted","Data":"c55632324b36c1cf998f1fe1dace9e343e9837a67e54aa2706722b006b062334"}
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.027743 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.028281 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.528258308 +0000 UTC m=+155.489505331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.129589 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.130068 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.630048464 +0000 UTC m=+155.591295497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.130869 4979 generic.go:334] "Generic (PLEG): container finished" podID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d" exitCode=0
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.134081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d"}
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.231172 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.231282 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.731258135 +0000 UTC m=+155.692505168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
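The nestedpendingoperations records above show how the kubelet gates retries: each failed mount or unmount arms a per-volume deadline ("No retries permitted until ...", durationBeforeRetry 500ms) and the reconciler skips the operation until that window elapses. A minimal Go sketch of the gating pattern; the backoff type and its fields are illustrative, not the kubelet's actual exponential-backoff implementation:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff gates retries of one named operation: after a failure the
    // operation may not run again until lastErrorTime + delay has passed.
    type backoff struct {
        lastErrorTime time.Time
        delay         time.Duration
    }

    // check mirrors the "No retries permitted until ..." log line while
    // the retry window is still closed.
    func (b *backoff) check(op string, now time.Time) error {
        deadline := b.lastErrorTime.Add(b.delay)
        if now.Before(deadline) {
            return fmt.Errorf("operation for %q failed. No retries permitted until %s (durationBeforeRetry %s)",
                op, deadline.Format(time.RFC3339Nano), b.delay)
        }
        return nil
    }

    func main() {
        b := backoff{lastErrorTime: time.Now(), delay: 500 * time.Millisecond}
        if err := b.check("volume pvc-657094db", time.Now()); err != nil {
            fmt.Println(err) // refused inside the 500ms window
        }
        time.Sleep(600 * time.Millisecond)
        if err := b.check("volume pvc-657094db", time.Now()); err == nil {
            fmt.Println("retry permitted") // window elapsed, reconciler tries again
        }
    }

That gate is why the same pair of TearDown/MountDevice errors recurs at roughly half-second intervals in the records that follow.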
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.231722 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.244593 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.744567082 +0000 UTC m=+155.705814115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.332803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.333194 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.833172784 +0000 UTC m=+155.794419817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.434223 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.434708 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.934691353 +0000 UTC m=+155.895938386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.535373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.536247 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.036225522 +0000 UTC m=+155.997472555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.639634 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.640253 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.140174729 +0000 UTC m=+156.101421762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.743225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.744268 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.243780345 +0000 UTC m=+156.205027378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.790563 4979 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.845588 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.846323 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.346295321 +0000 UTC m=+156.307542354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.946962 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.949055 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.449015263 +0000 UTC m=+156.410262296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
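The plugin_watcher record above is the turning point: the provisioner's registration socket has appeared under /var/lib/kubelet/plugins_registry, which is the directory the kubelet's plugin watcher monitors. A sketch of that directory-watch pattern using the fsnotify library; this illustrates the mechanism only and is not the kubelet's plugin_watcher code:

    package main

    import (
        "log"
        "strings"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        // Directory where CSI drivers drop their registration sockets.
        const registryDir = "/var/lib/kubelet/plugins_registry"

        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add(registryDir); err != nil {
            log.Fatal(err)
        }

        for ev := range w.Events {
            // A newly created *-reg.sock file is a driver announcing itself;
            // the kubelet then records it in its desired state cache.
            if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
                log.Printf("Adding socket path to desired state cache path=%s", ev.Name)
            }
        }
    }

From this point the retry loop keeps failing only until the registration handshake over that socket completes, as the next records show.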
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.048936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.049351 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.549335799 +0000 UTC m=+156.510582832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.151626 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.151842 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.651811424 +0000 UTC m=+156.613058457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.152249 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.153167 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.653149721 +0000 UTC m=+156.614396754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.162294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"5ec8c9bf60a3611a420968ee95c2d6718cfea93e6bf42a3a9fef27d42b1287e6"}
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.252826 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.253066 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.753012134 +0000 UTC m=+156.714259167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.253318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.254661 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.75465192 +0000 UTC m=+156.715898953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.285617 4979 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T21:42:39.790617051Z","Handler":null,"Name":""}
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.293135 4979 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.293186 4979 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.354077 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.358682 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.455705 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.489587 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.489740 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.537853 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.724308 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
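These records close the loop on the preceding failures: once csi_plugin.go validates and registers kubevirt.io.hostpath-provisioner, newCsiDriverClient can resolve the driver, TearDown and MountDevice succeed, and the image-registry pod's volume finally mounts. A sketch of the lookup behind the recurring "driver name ... not found in the list of registered CSI drivers" error, using an illustrative mutex-guarded map rather than the kubelet's actual driver store:

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry maps a CSI driver name to its endpoint, the way the
    // kubelet tracks drivers that have completed plugin registration.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // name -> unix socket endpoint
    }

    func (r *driverRegistry) register(name, endpoint string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = endpoint
    }

    // client fails exactly the way the log does while registration is
    // still in flight, and succeeds once the socket has been validated.
    func (r *driverRegistry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}}
        if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println(err) // the retry-loop error seen before registration
        }
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
        ep, _ := reg.client("kubevirt.io.hostpath-provisioner")
        fmt.Println("client connects to", ep)
    }

The hash segment in the globalmount device path above looks like a digest derived from the volume identity, but that derivation is not shown in the log, so it is left out of the sketch.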
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.079635 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.201134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerStarted","Data":"ef2d34dbe946d828987a41a728d8d2e42578f678fbc80dd5cead01215db34bdf"}
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.214640 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"]
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.217233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"4c74a85b0c156946a2d7c22e86e38054a855adbf4c322731e01b031c8aec76ec"}
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.260084 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" podStartSLOduration=17.260056468 podStartE2EDuration="17.260056468s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:41.258620438 +0000 UTC m=+157.219867471" watchObservedRunningTime="2026-01-30 21:42:41.260056468 +0000 UTC m=+157.221303501"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.264347 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.264322196 podStartE2EDuration="4.264322196s" podCreationTimestamp="2026-01-30 21:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:41.227904588 +0000 UTC m=+157.189151651" watchObservedRunningTime="2026-01-30 21:42:41.264322196 +0000 UTC m=+157.225569229"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.490713 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.497470 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.499249 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.503391 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.503685 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.572911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.573070 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.675210 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.675323 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.675664 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.708115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.815867 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.311456 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerStarted","Data":"772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb"}
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.311832 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerStarted","Data":"7b232422461df3a64ba9f7d1e8e42a5bbd92a1d12e44b90cbcab93e3d93f6389"}
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.311854 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.335982 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.350247 4979 generic.go:334] "Generic (PLEG): container finished" podID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerID="ef2d34dbe946d828987a41a728d8d2e42578f678fbc80dd5cead01215db34bdf" exitCode=0
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.350349 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerDied","Data":"ef2d34dbe946d828987a41a728d8d2e42578f678fbc80dd5cead01215db34bdf"}
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.357849 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" podStartSLOduration=137.357828241 podStartE2EDuration="2m17.357828241s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:42.3545407 +0000 UTC m=+158.315787723" watchObservedRunningTime="2026-01-30 21:42:42.357828241 +0000 UTC m=+158.319075274"
Jan 30 21:42:42 crc kubenswrapper[4979]: W0130 21:42:42.363562 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod55091f68_f13e_49c0_9b8a_3285b7eddb4b.slice/crio-633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97 WatchSource:0}: Error finding container 633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97: Status 404 returned error can't find the container with id 633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97
Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.635489 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-464m7"
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.382993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerStarted","Data":"633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97"}
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.717704 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831152 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") "
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831259 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") "
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831416 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" (UID: "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831895 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.838680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" (UID: "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.933531 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.404941 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
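The pod_startup_latency_tracker records above report podStartSLOduration as the observed-running time minus the pod's creation timestamp, with image-pull time excluded when pull timestamps are recorded (the zero "0001-01-01" values here mean no pull was counted). A small sketch of that arithmetic; the function and parameter names are descriptive stand-ins, and the assumption that the watch-observed running time feeds the SLO number is inferred only from the matching values in the log:

    package main

    import (
        "fmt"
        "time"
    )

    // startSLODuration mirrors the tracker's arithmetic: time from pod
    // creation to first observed running, minus time spent pulling images
    // (zero-valued pull timestamps mean no pull is subtracted).
    func startSLODuration(created, observedRunning, firstPull, lastPull time.Time) time.Duration {
        d := observedRunning.Sub(created)
        if !firstPull.IsZero() && !lastPull.IsZero() {
            d -= lastPull.Sub(firstPull)
        }
        return d
    }

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-01-30T21:42:24Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-01-30T21:42:41.260056468Z")
        // No pulls recorded for this pod ("0001-01-01 00:00:00" in the log).
        fmt.Println(startSLODuration(created, running, time.Time{}, time.Time{}))
        // prints 17.260056468s, matching podStartSLOduration for csi-hostpathplugin-tbr4j
    }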
Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.404929 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerDied","Data":"c55632324b36c1cf998f1fe1dace9e343e9837a67e54aa2706722b006b062334"}
Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.405657 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55632324b36c1cf998f1fe1dace9e343e9837a67e54aa2706722b006b062334"
Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.408722 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerStarted","Data":"60c8506ebd3462a1feb71dee8fcf85525328c298ce8cf6229738956b544217d6"}
Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.438323 4979 generic.go:334] "Generic (PLEG): container finished" podID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerID="60c8506ebd3462a1feb71dee8fcf85525328c298ce8cf6229738956b544217d6" exitCode=0
Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.438736 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerDied","Data":"60c8506ebd3462a1feb71dee8fcf85525328c298ce8cf6229738956b544217d6"}
Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443713 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443822 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443834 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443933 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.847636 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.857099 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.890806 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.895620 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-h6sv5"
Jan 30 21:42:49 crc kubenswrapper[4979]: I0130 21:42:49.086019 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.003565 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.093674 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") "
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.093917 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") "
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.096770 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "55091f68-f13e-49c0-9b8a-3285b7eddb4b" (UID: "55091f68-f13e-49c0-9b8a-3285b7eddb4b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.100304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "55091f68-f13e-49c0-9b8a-3285b7eddb4b" (UID: "55091f68-f13e-49c0-9b8a-3285b7eddb4b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.195125 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.195171 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.481780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerDied","Data":"633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97"}
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.481836 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97"
Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.481913 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.445397 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.445488 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446002 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446130 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446221 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hwb2t"
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446893 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446963 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.447292 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d"} pod="openshift-console/downloads-7954f5f757-hwb2t" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.447406 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" containerID="cri-o://1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d" gracePeriod=2
Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.653074 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"]
Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.653383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" containerID="cri-o://0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e" gracePeriod=30
Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.663324 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"]
Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.663637 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" containerID="cri-o://1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c" gracePeriod=30
Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.787980 4979 generic.go:334] "Generic (PLEG): container finished" podID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerID="1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d" exitCode=0
Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.788098 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerDied","Data":"1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d"}
Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.790429 4979 generic.go:334] "Generic (PLEG): container finished" podID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerID="0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e" exitCode=0
Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.790480 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerDied","Data":"0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e"}
Jan 30 21:42:58 crc kubenswrapper[4979]: I0130 21:42:58.635013 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
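Every prober record in this stretch has the same shape: an HTTP GET against the container's endpoint, with any transport error such as "connect: connection refused" recorded as probeResult=failure, and repeated liveness failures escalating to the grace-period kill logged just above. A sketch of one such probe round-trip; probeHTTP is an illustrative helper, not the kubelet's prober, which adds failure thresholds and more result states:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeHTTP performs one readiness/liveness check the way the log
    // records it: any transport error or non-2xx status is a failure.
    func probeHTTP(url string) (result string, output string) {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            // e.g. Get "http://10.217.0.15:8080/": connect: connection refused
            return "failure", err.Error()
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            return "failure", fmt.Sprintf("HTTP status %d", resp.StatusCode)
        }
        return "success", ""
    }

    func main() {
        result, out := probeHTTP("http://10.217.0.15:8080/")
        fmt.Printf("probeResult=%q output=%q\n", result, out)
        // After enough consecutive liveness failures the kubelet kills the
        // container with a grace period, as in the records above.
    }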
Jan 30 21:42:58 crc kubenswrapper[4979]: I0130 21:42:58.635114 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 21:42:59 crc kubenswrapper[4979]: I0130 21:42:59.802975 4979 generic.go:334] "Generic (PLEG): container finished" podID="828e6466-447a-47f9-9727-3992db7c27c9" containerID="1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c" exitCode=0
Jan 30 21:42:59 crc kubenswrapper[4979]: I0130 21:42:59.803024 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerDied","Data":"1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c"}
Jan 30 21:43:00 crc kubenswrapper[4979]: I0130 21:43:00.732624 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:43:02 crc kubenswrapper[4979]: I0130 21:43:02.040423 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:43:02 crc kubenswrapper[4979]: I0130 21:43:02.041228 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:43:05 crc kubenswrapper[4979]: I0130 21:43:05.445220 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:05 crc kubenswrapper[4979]: I0130 21:43:05.445731 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:06 crc kubenswrapper[4979]: I0130 21:43:06.275563 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 30 21:43:06 crc kubenswrapper[4979]: I0130 21:43:06.275682 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 30 21:43:07 crc kubenswrapper[4979]: I0130 21:43:07.951351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"
Jan 30 21:43:08 crc kubenswrapper[4979]: I0130 21:43:08.635744 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 21:43:08 crc kubenswrapper[4979]: I0130 21:43:08.635830 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 21:43:13 crc kubenswrapper[4979]: I0130 21:43:13.212222 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:43:15 crc kubenswrapper[4979]: I0130 21:43:15.443188 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:15 crc kubenswrapper[4979]: I0130 21:43:15.443658 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:17 crc kubenswrapper[4979]: I0130 21:43:17.275495 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 21:43:17 crc kubenswrapper[4979]: I0130 21:43:17.275586 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.290397 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.290963 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nls66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wjwlb_openshift-marketplace(cfb214a7-6df6-4fd6-a74c-db4f38b0a086): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.292491 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wjwlb" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.866602 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.866902 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.866922 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.866946 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.866955 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.867101 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.867119 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.867642 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.871731 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.871905 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.881733 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.996948 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.997078 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.098419 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.098529 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.098655 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.123213 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.195968 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.533637 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wjwlb" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.540508 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3114423416/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.540867 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqrjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qmzzl_openshift-marketplace(2b857a3f-c3a5-4851-ba1e-25d9dbc64de5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3114423416/2\": happened during read: context canceled" logger="UnhandledError" Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.542650 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage3114423416/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qmzzl" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.636106 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.636212 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.293607 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.294239 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqdhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-npfvh_openshift-marketplace(568a44ae-c892-48a7-b4c0-2d83606e7b95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.295488 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-npfvh" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.654547 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1514567063/2\": happened during 
read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.654741 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmmtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2tvd8_openshift-marketplace(3641ad73-644b-4d71-860b-4d8b7e6a3a6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1514567063/2\": happened during read: context canceled" logger="UnhandledError" Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.656229 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage1514567063/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-2tvd8" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.669584 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.670920 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.676271 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.864371 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.864457 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.864756 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965778 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965884 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965904 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.966113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.986000 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:24 crc kubenswrapper[4979]: I0130 21:43:24.006789 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:43:24 crc kubenswrapper[4979]: E0130 21:43:24.611803 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 21:43:24 crc kubenswrapper[4979]: E0130 21:43:24.613180 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rtgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dk444_openshift-marketplace(6ceea51c-f0b8-4de3-be53-f1d857b3a1b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:43:24 crc kubenswrapper[4979]: E0130 21:43:24.614393 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dk444" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" Jan 30 21:43:25 crc kubenswrapper[4979]: I0130 21:43:25.444119 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:25 crc kubenswrapper[4979]: I0130 21:43:25.444245 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial 
tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.274653 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.274813 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930359 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2tvd8" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930507 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qmzzl" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930534 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-npfvh" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930688 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dk444" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.997544 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerDied","Data":"2fdea5ec5c945a9b137321bd0204027de83c52d16c6cd7e9cca2d07e312e0fe5"} Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.997629 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fdea5ec5c945a9b137321bd0204027de83c52d16c6cd7e9cca2d07e312e0fe5" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.000105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerDied","Data":"deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096"} Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.000139 4979 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.010758 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.015757 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.041755 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:43:28 crc kubenswrapper[4979]: E0130 21:43:28.042074 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042091 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" Jan 30 21:43:28 crc kubenswrapper[4979]: E0130 21:43:28.042108 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042118 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042339 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042359 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042835 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.059046 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.134997 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135104 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135220 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135267 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135314 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135434 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136199 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136241 4979 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136332 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136422 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config" (OuterVolumeSpecName: "config") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136765 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136831 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136903 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config" (OuterVolumeSpecName: "config") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137422 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137572 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137607 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137627 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137648 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.141462 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.141597 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl" (OuterVolumeSpecName: "kube-api-access-j27sl") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "kube-api-access-j27sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.141982 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz" (OuterVolumeSpecName: "kube-api-access-frqrz") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "kube-api-access-frqrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.142880 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.238818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239003 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239086 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239182 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239312 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239334 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239353 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239372 
4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239389 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.241645 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.242915 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.246630 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.274944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.381077 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.006510 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.010015 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.051674 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.056656 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.066755 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.077877 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828e6466-447a-47f9-9727-3992db7c27c9" path="/var/lib/kubelet/pods/828e6466-447a-47f9-9727-3992db7c27c9/volumes" Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.078680 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.119433 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.119924 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hzvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-454jj_openshift-marketplace(82df7d39-6821-4916-b8c9-534688ca3d5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.121305 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-454jj" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.256741 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.256925 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snrx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-krrkl_openshift-marketplace(9ced41eb-6843-4dfe-81c7-267a56f75a73): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.258414 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-krrkl" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.461780 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.462698 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.465479 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.465719 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.466818 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.466987 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.467265 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.468181 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.468406 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.473295 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573798 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573853 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573906 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573931 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwbb\" (UniqueName: 
\"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675360 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675432 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675517 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675586 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.676498 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.676883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.691186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 
30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.698483 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.701705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.784553 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:31 crc kubenswrapper[4979]: I0130 21:43:31.077675 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" path="/var/lib/kubelet/pods/ff61cd4b-2b9f-4588-be96-10038ccc4a92/volumes" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.039543 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.039618 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.039672 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.040350 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.040427 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d" gracePeriod=600 Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.907270 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-454jj" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.907267 4979 pod_workers.go:1301] 
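[Editor's note] The liveness sequence above (patch_prober -> prober -> "SyncLoop (probe)" -> "Killing container with a grace period") is the kubelet reacting to a failed HTTP health check. A minimal sketch of such a check, assuming only the endpoint shown in the log (http://127.0.0.1:8798/health) and Go's standard library, not the kubelet's actual prober code:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP issues one GET against a health endpoint and classifies the
// result the way the log lines above do: any transport error (e.g.
// "connect: connection refused") or an error-class status is a failure.
func probeHTTP(url string, timeout time.Duration) (healthy bool, output string) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return false, fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return true, ""
	}
	return false, fmt.Sprintf("HTTP status %d", resp.StatusCode)
}

func main() {
	// Endpoint taken from the machine-config-daemon entries above.
	if ok, out := probeHTTP("http://127.0.0.1:8798/health", time.Second); !ok {
		fmt.Printf("Probe failed: probeResult=%q output=%q\n", "failure", out)
	}
}
```

A container is only restarted after the probe's failure threshold is exceeded (3 consecutive failures by default), and the kill honors the pod's termination grace period, which is where gracePeriod=600 above comes from.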
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-krrkl" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.951540 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.951816 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gc7xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sg6j7_openshift-marketplace(444df6ed-3c43-4310-adc6-69ab0a9ea702): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.953715 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sg6j7" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.049298 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d" exitCode=0 Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.050885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d"} Jan 30 21:43:34 crc kubenswrapper[4979]: 
E0130 21:43:34.059405 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sg6j7" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.409847 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pk47q"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.457825 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.461063 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.504078 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.515058 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:43:34 crc kubenswrapper[4979]: W0130 21:43:34.605607 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc138f389_e49e_4c26_b2ee_af169b1c8343.slice/crio-18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894 WatchSource:0}: Error finding container 18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894: Status 404 returned error can't find the container with id 18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894 Jan 30 21:43:34 crc kubenswrapper[4979]: W0130 21:43:34.607530 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podceeab3d6_4012_4d7b_ae04_fc3829fafd53.slice/crio-086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29 WatchSource:0}: Error finding container 086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29: Status 404 returned error can't find the container with id 086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29 Jan 30 21:43:34 crc kubenswrapper[4979]: W0130 21:43:34.614006 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cce4b7_6306_43b1_8e2d_e4a29ec3bd6b.slice/crio-f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77 WatchSource:0}: Error finding container f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77: Status 404 returned error can't find the container with id f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77 Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.058980 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerStarted","Data":"18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.060878 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pk47q" event={"ID":"d0632938-c88a-4c22-b0e7-8f7473532f07","Type":"ContainerStarted","Data":"cea6153ed06c9de30a36b68e36f6f2955a3c78913f12aaf4be6fdc08005dafe9"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 
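[Editor's note] The ErrImagePull / ImagePullBackOff pair above reflects the kubelet's image-pull back-off: the first failure is reported directly, and subsequent pod syncs are skipped while the back-off timer runs, roughly doubling between attempts. A sketch of that progression, assuming the commonly documented 10s initial delay and 5m cap rather than values visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

// pullBackoff returns the wait before retry number `failures`, doubling
// from an initial delay up to a ceiling. 10s and 5m are assumed defaults.
func pullBackoff(failures int) time.Duration {
	const (
		initial = 10 * time.Second
		ceiling = 5 * time.Minute
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= ceiling {
			return ceiling
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		// 10s 20s 40s 1m20s 2m40s 5m0s
		fmt.Printf("pull failure %d -> back off %v\n", n, pullBackoff(n))
	}
}
```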
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.062507 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerStarted","Data":"f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77"}
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.063922 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerStarted","Data":"730bda6f6ba79a0d724889d0d885e5fa44125a1c153bf8e55571376fa265a6a7"}
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.065182 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerStarted","Data":"086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29"}
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.069659 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerStarted","Data":"6ff5511a83d2a904767161a15f8ce1841610d209646439ad72a484c4a5b712cd"}
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.072212 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.072300 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.100492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b"}
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.100586 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hwb2t"
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.443779 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.444341 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.443873 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.444464 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.100573 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerStarted","Data":"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3"}
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.101392 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.106679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerStarted","Data":"781bc5e2c22325e2f70b4c7a950fbfb8ab9d6654a493cdce31f3a0b0d7a6013c"}
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.107204 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.109121 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerStarted","Data":"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb"}
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.110017 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.112783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pk47q" event={"ID":"d0632938-c88a-4c22-b0e7-8f7473532f07","Type":"ContainerStarted","Data":"73784d244c4b4f76027150ba2267d5951c452f9b420bcd2ddc54ad5d64244ccb"}
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.112812 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pk47q" event={"ID":"d0632938-c88a-4c22-b0e7-8f7473532f07","Type":"ContainerStarted","Data":"5d52b8c7e5fa9243c158fc3cb8200cb6a4f35c94b30d14448faa99dfa0683260"}
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.115783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerStarted","Data":"8fde1572bb636a4d23b5e24e14b788b050124ee0ad5e961a05afc8ed632de43e"}
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.116595 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.116635 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.117467 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.127820 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" podStartSLOduration=20.127792969 podStartE2EDuration="20.127792969s" podCreationTimestamp="2026-01-30 21:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.124456176 +0000 UTC m=+212.085703209" watchObservedRunningTime="2026-01-30 21:43:36.127792969 +0000 UTC m=+212.089040012"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.157516 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" podStartSLOduration=20.157499098 podStartE2EDuration="20.157499098s" podCreationTimestamp="2026-01-30 21:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.156872241 +0000 UTC m=+212.118119274" watchObservedRunningTime="2026-01-30 21:43:36.157499098 +0000 UTC m=+212.118746121"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.194429 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pk47q" podStartSLOduration=191.194387607 podStartE2EDuration="3m11.194387607s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.191618699 +0000 UTC m=+212.152865742" watchObservedRunningTime="2026-01-30 21:43:36.194387607 +0000 UTC m=+212.155634640"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.216968 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=13.216943196 podStartE2EDuration="13.216943196s" podCreationTimestamp="2026-01-30 21:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.21674522 +0000 UTC m=+212.177992273" watchObservedRunningTime="2026-01-30 21:43:36.216943196 +0000 UTC m=+212.178190229"
Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.235868 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=18.235846284 podStartE2EDuration="18.235846284s" podCreationTimestamp="2026-01-30 21:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.235245966 +0000 UTC m=+212.196492999" watchObservedRunningTime="2026-01-30 21:43:36.235846284 +0000 UTC m=+212.197093317"
Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.132069 4979 generic.go:334] "Generic (PLEG): container finished" podID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerID="6777c7a712aaeb3b92c712ea13c14e93a0636f80d815df1f08df98f2e3cc68fe" exitCode=0
Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.132170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"6777c7a712aaeb3b92c712ea13c14e93a0636f80d815df1f08df98f2e3cc68fe"}
Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.134630 4979 generic.go:334] "Generic (PLEG): container finished" podID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerID="781bc5e2c22325e2f70b4c7a950fbfb8ab9d6654a493cdce31f3a0b0d7a6013c" exitCode=0
Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.134758 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerDied","Data":"781bc5e2c22325e2f70b4c7a950fbfb8ab9d6654a493cdce31f3a0b0d7a6013c"}
Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.138480 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.138755 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.144591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerStarted","Data":"66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7"}
Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.164679 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wjwlb" podStartSLOduration=3.300609671 podStartE2EDuration="1m4.164646854s" podCreationTimestamp="2026-01-30 21:42:34 +0000 UTC" firstStartedPulling="2026-01-30 21:42:36.707860465 +0000 UTC m=+152.669107498" lastFinishedPulling="2026-01-30 21:43:37.571897638 +0000 UTC m=+213.533144681" observedRunningTime="2026-01-30 21:43:38.161799135 +0000 UTC m=+214.123046168" watchObservedRunningTime="2026-01-30 21:43:38.164646854 +0000 UTC m=+214.125893917"
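[Editor's note] The pod_startup_latency_tracker lines record two durations: podStartE2EDuration (pod creation to first observed running) and podStartSLOduration, which additionally discounts the image-pull window. That is why the pods above with zeroed pull timestamps report identical values, while redhat-marketplace-wjwlb reports 1m4.16s end-to-end but only ~3.3s SLO once its ~60.86s pull (21:42:36.707 to 21:43:37.571) is subtracted. A sketch of that arithmetic, with timestamps copied from the entry above; the real tracker's bookkeeping is more involved:

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations mirrors the relationship between the two logged values:
// E2E is creation -> first observed running; SLO excludes the image-pull
// window, so pods with cached images report SLO == E2E.
func startupDurations(created, firstPull, lastPull, observedRunning time.Time) (slo, e2e time.Duration) {
	e2e = observedRunning.Sub(created)
	slo = e2e
	if !firstPull.IsZero() {
		slo -= lastPull.Sub(firstPull)
	}
	return slo, e2e
}

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values from the redhat-marketplace-wjwlb entry above.
	slo, e2e := startupDurations(
		parse("2026-01-30T21:42:34Z"),
		parse("2026-01-30T21:42:36.707860465Z"),
		parse("2026-01-30T21:43:37.571897638Z"),
		parse("2026-01-30T21:43:38.164646854Z"),
	)
	fmt.Printf("podStartSLOduration=%v podStartE2EDuration=%v\n", slo, e2e) // ~3.3s, 1m4.164646854s
}
```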
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.493679 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.493774 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.493921 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ceeab3d6-4012-4d7b-ae04-fc3829fafd53" (UID: "ceeab3d6-4012-4d7b-ae04-fc3829fafd53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.494099 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.505797 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ceeab3d6-4012-4d7b-ae04-fc3829fafd53" (UID: "ceeab3d6-4012-4d7b-ae04-fc3829fafd53"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.595454 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4979]: I0130 21:43:39.154124 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerDied","Data":"086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29"} Jan 30 21:43:39 crc kubenswrapper[4979]: I0130 21:43:39.154179 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29" Jan 30 21:43:39 crc kubenswrapper[4979]: I0130 21:43:39.154994 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:44 crc kubenswrapper[4979]: I0130 21:43:44.871587 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:44 crc kubenswrapper[4979]: I0130 21:43:44.872166 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:45 crc kubenswrapper[4979]: I0130 21:43:45.443153 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:45 crc kubenswrapper[4979]: I0130 21:43:45.457889 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:43:45 crc kubenswrapper[4979]: I0130 21:43:45.513388 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.224391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerStarted","Data":"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.229062 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.229087 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.232419 4979 generic.go:334] "Generic (PLEG): container finished" podID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.232484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.235708 4979 generic.go:334] "Generic (PLEG): container finished" podID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.235741 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.238333 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerID="9c8374b15b5619f4f1304cf75cea07e98769e40d36978831645aa6ad442f9748" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.238418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" 
event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"9c8374b15b5619f4f1304cf75cea07e98769e40d36978831645aa6ad442f9748"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.241823 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerStarted","Data":"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.244176 4979 generic.go:334] "Generic (PLEG): container finished" podID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerID="8ce38f5c2d102434af1616c327c364faa35dac4f176a6f600fbf112072871235" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.244217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"8ce38f5c2d102434af1616c327c364faa35dac4f176a6f600fbf112072871235"} Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.255382 4979 generic.go:334] "Generic (PLEG): container finished" podID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126" exitCode=0 Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.255468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"} Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.258690 4979 generic.go:334] "Generic (PLEG): container finished" podID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516" exitCode=0 Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.258736 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"} Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.266229 4979 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267405 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267469 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267565 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267654 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c" gracePeriod=15
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267818 4979 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267580 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4" gracePeriod=15
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268268 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268289 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268301 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268309 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268327 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268336 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268346 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268354 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerName="pruner"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268375 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerName="pruner"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268389 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268397 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268421 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268430 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268438 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268567 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268583 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268597 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerName="pruner"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268609 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268622 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268637 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268650 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.277123 4979 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.284435 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425115 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425402 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425464 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425513 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425542 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425572 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.527885 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528069 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528127 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528165 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528217 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528237 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528257 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528272 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528352 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528389 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528442 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528508 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528499 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528530 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528596 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.878758 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.880632 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.881560 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c" exitCode=2 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.894239 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.897235 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898737 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898811 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898835 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898879 4979 scope.go:117] "RemoveContainer" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.903531 4979 generic.go:334] "Generic (PLEG): container finished" podID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerID="8fde1572bb636a4d23b5e24e14b788b050124ee0ad5e961a05afc8ed632de43e" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.903637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerDied","Data":"8fde1572bb636a4d23b5e24e14b788b050124ee0ad5e961a05afc8ed632de43e"} Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.905203 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.073071 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.621515 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.622656 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.623110 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc 
kubenswrapper[4979]: E0130 21:44:15.623566 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.624272 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.624332 4979 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.624709 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="200ms" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.825793 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="400ms" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.923068 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.924009 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636" exitCode=0 Jan 30 21:44:16 crc kubenswrapper[4979]: E0130 21:44:16.095452 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). 
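[Editor's note] With the API server down, the node-lease controller above fails five updates, falls back to ensure-lease, and then retries with a doubling interval: 200ms and 400ms above, 800ms, 1.6s, and 3.2s below. A sketch of that escalation; the doubling matches the log, while the 7s ceiling is an assumption rather than something visible in this excerpt:

```go
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the ensure-lease retry interval, matching the
// 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s progression in the log.
func nextInterval(cur time.Duration) time.Duration {
	const (
		base    = 200 * time.Millisecond
		ceiling = 7 * time.Second // assumed cap, not shown in this excerpt
	)
	if cur == 0 {
		return base
	}
	if d := cur * 2; d < ceiling {
		return d
	}
	return ceiling
}

func main() {
	var interval time.Duration
	for i := 0; i < 6; i++ {
		interval = nextInterval(interval)
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s 6.4s
	}
}
```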
Jan 30 21:44:16 crc kubenswrapper[4979]: E0130 21:44:16.095452 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,LastTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 21:44:16 crc kubenswrapper[4979]: E0130 21:44:16.227290 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="800ms"
Jan 30 21:44:17 crc kubenswrapper[4979]: E0130 21:44:17.028581 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="1.6s"
Jan 30 21:44:18 crc kubenswrapper[4979]: E0130 21:44:18.315730 4979 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.316708 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 21:44:18 crc kubenswrapper[4979]: E0130 21:44:18.629890 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="3.2s"
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.670763 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.671454 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.822113 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") "
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.822874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") "
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.822242 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf5abe88-43e9-47ae-87fc-9163bd1aec5e" (UID: "bf5abe88-43e9-47ae-87fc-9163bd1aec5e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823066 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") "
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823211 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf5abe88-43e9-47ae-87fc-9163bd1aec5e" (UID: "bf5abe88-43e9-47ae-87fc-9163bd1aec5e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823875 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823964 4979 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.832295 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf5abe88-43e9-47ae-87fc-9163bd1aec5e" (UID: "bf5abe88-43e9-47ae-87fc-9163bd1aec5e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.925506 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.946316 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerDied","Data":"730bda6f6ba79a0d724889d0d885e5fa44125a1c153bf8e55571376fa265a6a7"}
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.946374 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.946396 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730bda6f6ba79a0d724889d0d885e5fa44125a1c153bf8e55571376fa265a6a7"
Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.966110 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:19 crc kubenswrapper[4979]: E0130 21:44:19.643678 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,LastTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.257576 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.261826 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.262682 4979 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.263153 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266121 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266165 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266208 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266667 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266713 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.368051 4979 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.368088 4979 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.368101 4979 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:21 crc kubenswrapper[4979]: E0130 21:44:21.831327 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="6.4s" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.966592 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.967424 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.983502 4979 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.983676 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:23 crc kubenswrapper[4979]: I0130 21:44:23.084433 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 21:44:24 crc kubenswrapper[4979]: I0130 21:44:24.085857 4979 scope.go:117] "RemoveContainer" containerID="b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978" Jan 30 21:44:24 crc kubenswrapper[4979]: E0130 21:44:24.159415 4979 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" volumeName="registry-storage" Jan 30 21:44:24 crc kubenswrapper[4979]: I0130 21:44:24.989704 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 
30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.073005 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.571268 4979 scope.go:117] "RemoveContainer" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:44:25 crc kubenswrapper[4979]: E0130 21:44:25.572950 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\": container with ID starting with aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd not found: ID does not exist" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.572989 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd"} err="failed to get container status \"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\": rpc error: code = NotFound desc = could not find container \"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\": container with ID starting with aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd not found: ID does not exist" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.573012 4979 scope.go:117] "RemoveContainer" containerID="af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.722112 4979 scope.go:117] "RemoveContainer" containerID="5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.879081 4979 scope.go:117] "RemoveContainer" containerID="19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.906816 4979 scope.go:117] "RemoveContainer" containerID="3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.943218 4979 scope.go:117] "RemoveContainer" containerID="cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.004073 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5c7960b8aed2f0a8dcc77f54da71a656f159fd58147502658fe9b679616525d5"} Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.013280 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.013486 4979 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42" exitCode=1 Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.013738 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42"} Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.014637 4979 scope.go:117] "RemoveContainer" containerID="019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.014979 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.015505 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.071580 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.073499 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.073905 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.136535 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.136613 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:26 crc kubenswrapper[4979]: E0130 21:44:26.137647 4979 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.138296 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:27 crc kubenswrapper[4979]: I0130 21:44:27.033531 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d22c14f49426106c827dc4b7a7b9fead3e323787335d91190c8e8d4de65efbdb"} Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.140954 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:9bde862635f230b66b73aad05940f6cf2c0555a47fe1db330a20724acca8d497\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:db103f9b4d410efdd30da231ffebe8f093377e6c1e4064ddc68046925eb4627f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1680805611},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:63fbea3b7080a0b403eaf16b3fed3ceda4cbba1fb0d71797d201d97e0745475c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eecad2fc166355255907130f5b4a16ed876f792fe4420ae700dbc3741c3a382e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202122991},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:84bdfaa1280b6132c66ed59de2078e0bd7672cde009357354bf028b9a1673a95\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d9b8bab836aa892d91fb35d5c17765fc6fa4b62c78de50c2a7d885c33cc5415d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187449074},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/c
ertified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462
\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\
\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.141266 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.141729 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.142368 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.142640 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.142666 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:28 crc kubenswrapper[4979]: I0130 21:44:28.045571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerStarted","Data":"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"} Jan 30 21:44:28 crc kubenswrapper[4979]: I0130 21:44:28.059625 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:44:28 crc kubenswrapper[4979]: E0130 21:44:28.232076 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="7s" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.052936 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1"} Jan 30 21:44:29 crc kubenswrapper[4979]: E0130 21:44:29.054001 4979 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.054109 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.054607 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.055958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerStarted","Data":"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.056730 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.056910 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.057296 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.059115 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerStarted","Data":"165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.060240 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.060811 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.061262 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.061498 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.061725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerStarted","Data":"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.062524 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.062878 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.063242 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.063486 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.063699 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.065353 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.066449 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f134a187b1223352e6962f82641cf0aa50b285311821a652d66179e0adabda49"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.066567 4979 status_manager.go:851] 
"Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.066890 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.067240 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.067480 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.067813 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.081552 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerStarted","Data":"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.081594 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerStarted","Data":"74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083097 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083356 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083783 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083960 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084188 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerStarted","Data":"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084358 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084555 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084704 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084845 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.085140 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.087177 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.087518 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.092181 4979 status_manager.go:851] "Failed to get status for pod" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" pod="openshift-marketplace/redhat-marketplace-qmzzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmzzl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.092627 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.092834 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093003 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093425 4979 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312" exitCode=0 Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093424 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093753 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093955 4979 status_manager.go:851] "Failed to get status for pod" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" pod="openshift-marketplace/redhat-operators-2tvd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2tvd8\": dial 
tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093987 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093964 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094054 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094155 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: E0130 21:44:29.094347 4979 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094422 4979 status_manager.go:851] "Failed to get status for pod" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" pod="openshift-marketplace/redhat-marketplace-qmzzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmzzl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094744 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094974 4979 status_manager.go:851] "Failed to get status for pod" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" pod="openshift-marketplace/redhat-operators-2tvd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2tvd8\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.095284 4979 status_manager.go:851] "Failed to get status for pod" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" pod="openshift-marketplace/redhat-marketplace-qmzzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmzzl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.095554 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" 
Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.095824 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096069 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096240 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096411 4979 status_manager.go:851] "Failed to get status for pod" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" pod="openshift-marketplace/certified-operators-dk444" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dk444\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096607 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused"
Jan 30 21:44:29 crc kubenswrapper[4979]: E0130 21:44:29.646687 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,LastTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 21:44:30 crc kubenswrapper[4979]: I0130 21:44:30.102619 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1686de9e5686295b98660d6de6c3153fe990ec48d7c5f2b7885125a296d0332"}
Jan 30 21:44:30 crc kubenswrapper[4979]: I0130 21:44:30.102682 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"223bfae37478509ca7d8f8d6c295f41d7165a10eb31ad76c77be44273b7b0c0f"}
Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.146172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0937ba2cdd38e1742698d325d893e8ab8922542010d8f0b3af5a2e8bcaa307a"}
Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.146943 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c80b5290a41693c0e021e97a7edf0c42a4c69ce69da49b5be85f4aa76f7214f5"}
Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.807887 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.814137 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157126 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4d9222535a37568ced81697c84e701762a1cf4c9acfd49ba2efb7f1ff81d184"}
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157636 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157672 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157972 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.576401 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.576491 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.630425 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.700627 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-krrkl"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.701298 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-krrkl"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.756019 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-krrkl"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.934520 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-npfvh"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.934582 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-npfvh"
Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.979003 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-npfvh"
Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.213470 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-npfvh"
Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.217102 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-krrkl"
Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.219232 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.424956 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-454jj"
Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.425056 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-454jj"
Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.468180 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-454jj"
Jan 30 21:44:34 crc kubenswrapper[4979]: I0130 21:44:34.215821 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-454jj"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.155565 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.155657 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.209554 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.259368 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmzzl"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.818387 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.818448 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.877152 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.993143 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.993228 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.062919 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.139055 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.139216 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.139244 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.146882 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.227550 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.229749 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sg6j7"
Jan 30 21:44:37 crc kubenswrapper[4979]: I0130 21:44:37.185864 4979 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:37 crc kubenswrapper[4979]: I0130 21:44:37.232465 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://223bfae37478509ca7d8f8d6c295f41d7165a10eb31ad76c77be44273b7b0c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b5290a41693c0e021e97a7edf0c42a4c69ce69da49b5be85f4aa76f7214f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1686de9e5686295b98660d6de6c3153fe990ec48d7c5f2b7885125a296d0332\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d9222535a37568ced81697c84e701762a1cf4c9acfd49ba2efb7f1ff81d184\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0937ba2cdd38e1742698d325d893e8ab8922542010d8f0b3af5a2e8bcaa307a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:30Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"12b694c6-7029-4077-a1d9-ffd9919dd5ee\": field is immutable"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.064326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.075161 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d572690-3742-4c1d-b3e0-d43d6664ef66"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.192297 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.192335 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.196848 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d572690-3742-4c1d-b3e0-d43d6664ef66"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.197184 4979 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://223bfae37478509ca7d8f8d6c295f41d7165a10eb31ad76c77be44273b7b0c0f"
Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.197201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:39 crc kubenswrapper[4979]: I0130 21:44:39.198971 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:39 crc kubenswrapper[4979]: I0130 21:44:39.199015 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:39 crc kubenswrapper[4979]: I0130 21:44:39.202970 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d572690-3742-4c1d-b3e0-d43d6664ef66"
Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.211645 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.695156 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.822356 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.896452 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 21:44:47 crc kubenswrapper[4979]: I0130 21:44:47.439866 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 21:44:47 crc kubenswrapper[4979]: I0130 21:44:47.812655 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.175209 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.207734 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.302394 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.528422 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.606396 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.824724 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.902423 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.915927 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.953921 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.062145 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.078269 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.093072 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.141179 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.239640 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.406660 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.480236 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.502440 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.522023 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.551189 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.579627 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.591825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.596312 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.605214 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.605262 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.760207 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.776598 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.957701 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.987079 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.059170 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.156906 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.261754 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.263472 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.398955 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.486811 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.506350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.563263 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.613797 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.690861 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.743653 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.802101 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.030257 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.132310 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.209438 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.215005 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.308932 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.368543 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.391458 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.416718 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.449402 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.491664 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.492752 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.536592 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.630279 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.757605 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.813627 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.955608 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.067706 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.085132 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.099867 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.221321 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.222090 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.235911 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.258696 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.326137 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.329682 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.333592 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.422728 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.502406 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.528813 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.535459 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.556276 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.563007 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.564850 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.617871 4979 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.617922 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.618147 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-npfvh" podStartSLOduration=32.724268139 podStartE2EDuration="2m20.618073447s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:35.581345717 +0000 UTC m=+151.542592750" lastFinishedPulling="2026-01-30 21:44:23.475151025 +0000 UTC m=+259.436398058" observedRunningTime="2026-01-30 21:44:36.903460569 +0000 UTC m=+272.864707602" watchObservedRunningTime="2026-01-30 21:44:52.618073447 +0000 UTC m=+288.579320480"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.623173 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-454jj" podStartSLOduration=30.708594006 podStartE2EDuration="2m20.619026383s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:35.661020712 +0000 UTC m=+151.622267755" lastFinishedPulling="2026-01-30 21:44:25.571453069 +0000 UTC m=+261.532700132" observedRunningTime="2026-01-30 21:44:36.919612171 +0000 UTC m=+272.880859204" watchObservedRunningTime="2026-01-30 21:44:52.619026383 +0000 UTC m=+288.580273416"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.624347 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.628460 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.627613 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sg6j7" podStartSLOduration=37.167108538 podStartE2EDuration="2m17.627450513s" podCreationTimestamp="2026-01-30 21:42:35 +0000 UTC" firstStartedPulling="2026-01-30 21:42:39.149181543 +0000 UTC m=+155.110428576" lastFinishedPulling="2026-01-30 21:44:19.609523528 +0000 UTC m=+255.570770551" observedRunningTime="2026-01-30 21:44:36.871554769 +0000 UTC m=+272.832801802" watchObservedRunningTime="2026-01-30 21:44:52.627450513 +0000 UTC m=+288.588697576"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.630237 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmzzl" podStartSLOduration=31.464259558 podStartE2EDuration="2m18.630228499s" podCreationTimestamp="2026-01-30 21:42:34 +0000 UTC" firstStartedPulling="2026-01-30 21:42:37.796412455 +0000 UTC m=+153.757659488" lastFinishedPulling="2026-01-30 21:44:24.962381396 +0000 UTC m=+260.923628429" observedRunningTime="2026-01-30 21:44:36.956162919 +0000 UTC m=+272.917409972" watchObservedRunningTime="2026-01-30 21:44:52.630228499 +0000 UTC m=+288.591475532"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.630744 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tvd8" podStartSLOduration=40.453309381 podStartE2EDuration="2m17.630703712s" podCreationTimestamp="2026-01-30 21:42:35 +0000 UTC" firstStartedPulling="2026-01-30 21:42:38.91666425 +0000 UTC m=+154.877911283" lastFinishedPulling="2026-01-30 21:44:16.094058581 +0000 UTC m=+252.055305614" observedRunningTime="2026-01-30 21:44:36.935189346 +0000 UTC m=+272.896436379" watchObservedRunningTime="2026-01-30 21:44:52.630703712 +0000 UTC m=+288.591950745"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.636241 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-krrkl" podStartSLOduration=32.766471652999996 podStartE2EDuration="2m20.636225183s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:34.426255288 +0000 UTC m=+150.387502321" lastFinishedPulling="2026-01-30 21:44:22.296008808 +0000 UTC m=+258.257255851" observedRunningTime="2026-01-30 21:44:36.855243923 +0000 UTC m=+272.816490966" watchObservedRunningTime="2026-01-30 21:44:52.636225183 +0000 UTC m=+288.597472216"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.637005 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk444" podStartSLOduration=32.133758108 podStartE2EDuration="2m20.636991814s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:35.581770459 +0000 UTC m=+151.543017492" lastFinishedPulling="2026-01-30 21:44:24.085004155 +0000 UTC m=+260.046251198" observedRunningTime="2026-01-30 21:44:36.888050229 +0000 UTC m=+272.849297262" watchObservedRunningTime="2026-01-30 21:44:52.636991814 +0000 UTC m=+288.598238857"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.643940 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.644738 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.644936 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.646056 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.646085 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.716592 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.738711 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.797403 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.888415 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.013479 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.160204 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.172057 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.232874 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.279661 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.297897 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.345707 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.345683168 podStartE2EDuration="16.345683168s" podCreationTimestamp="2026-01-30 21:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:44:53.345680638 +0000 UTC m=+289.306927671" watchObservedRunningTime="2026-01-30 21:44:53.345683168 +0000 UTC m=+289.306930201"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.360922 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=17.360900983 podStartE2EDuration="17.360900983s" podCreationTimestamp="2026-01-30 21:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:44:53.359906306 +0000 UTC m=+289.321153339" watchObservedRunningTime="2026-01-30 21:44:53.360900983 +0000 UTC m=+289.322148016"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.405514 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.505268 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.552793 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.695744 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.756288 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.763415 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.772349 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.801810 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.814263 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.857512 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.002219 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.015287 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.051290 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.134697 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.167456 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.179076 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.212837 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.259910 4979 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.264160 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.273267 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.298790 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.555886 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.602908 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.612141 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.672011 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.751685 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.859192 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.956438 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.090003 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.111965 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.151625 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.241771 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.263899 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.272566 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.363606 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.382827 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.384406 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.691854 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.703664 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.720628 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.820859 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.911523 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.939608 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.994784 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.045182 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.053554 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.101954 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.115980 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.133755 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.149621 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.255591 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.324921 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.356061 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.380434 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.381976 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.416627 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.452885 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.623246 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.643098 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.644294 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.735071 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.757650 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.790562 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.823415 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.841791 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.893337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.016404 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.065797 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.080184 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.081311 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.090575 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.205946 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.279212 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.385479 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.437457 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.548224 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.568499 4979 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.718411 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.737500 4979 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.737723 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.806690 4979 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.976150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.013368 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.202478 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.217541 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.312724 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.316677 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.322414 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.329283 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.386785 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.501239 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.563396 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.592988 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.710946 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.823130 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.832088 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.856149 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.925718 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.937235 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.960356 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.077341 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.080205 4979 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.080509 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1" gracePeriod=5
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.104757 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.108450 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.126017 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.141309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.182898 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.254067 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.257380 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.259013 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.297821 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.677597 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.684357 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.686706 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.844460 4979 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.855350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.009919 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.019462 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.024769 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.088433 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.097362 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.118261 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184142 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"]
Jan 30 21:45:00 crc kubenswrapper[4979]: E0130 21:45:00.184420 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerName="installer"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184435 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerName="installer"
Jan 30 21:45:00 crc kubenswrapper[4979]: E0130 21:45:00.184469 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184478 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184594 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184603 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerName="installer"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.185127 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.191691 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.192006 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.195945 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"]
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.293596 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.324563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.324944 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.325073 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.364350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.426322 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.426399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"
Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.426504 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db4r5\" (UniqueName:
\"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.427745 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.446421 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.457455 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.500056 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.505083 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.602623 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.619063 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.653627 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.653646 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.707402 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.894829 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.927098 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"] Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.985914 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.991140 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.000523 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.103612 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.230855 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.236861 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.343023 4979 generic.go:334] "Generic (PLEG): container finished" podID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerID="0ffeefd62cefc7a667955d4354abe400003540bade5b7a6dadf2ad36b308e029" exitCode=0 Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.343093 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" event={"ID":"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa","Type":"ContainerDied","Data":"0ffeefd62cefc7a667955d4354abe400003540bade5b7a6dadf2ad36b308e029"} Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.343123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" event={"ID":"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa","Type":"ContainerStarted","Data":"e67f6eed31b14319537901f85e6944a16de2613d83fcff0ea3359270388b5241"} Jan 30 21:45:01 crc 
kubenswrapper[4979]: I0130 21:45:01.368355 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.447020 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.511629 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.557567 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.720145 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.729975 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.801397 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.909907 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.121498 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.271360 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.613128 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.757720 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.758133 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.758224 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.759325 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" (UID: "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.763082 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5" (OuterVolumeSpecName: "kube-api-access-db4r5") pod "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" (UID: "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa"). InnerVolumeSpecName "kube-api-access-db4r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.763425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" (UID: "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.860006 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.860063 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.860081 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.109557 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.355949 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" event={"ID":"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa","Type":"ContainerDied","Data":"e67f6eed31b14319537901f85e6944a16de2613d83fcff0ea3359270388b5241"} Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.356008 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e67f6eed31b14319537901f85e6944a16de2613d83fcff0ea3359270388b5241" Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.356023 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.363918 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.363996 4979 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1" exitCode=137 Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.659721 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.659879 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.787967 4979 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789385 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789565 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789605 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789643 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789692 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789811 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789889 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789946 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789990 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790343 4979 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790370 4979 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790381 4979 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790393 4979 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.795762 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.891314 4979 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.085611 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.085849 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.097406 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.097693 4979 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6222e4b9-7642-4957-b2f8-2b18a8a64b75" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.100698 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.100722 4979 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6222e4b9-7642-4957-b2f8-2b18a8a64b75" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.372345 4979 scope.go:117] "RemoveContainer" containerID="9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.372399 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:45:18 crc kubenswrapper[4979]: I0130 21:45:18.004381 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:45:19 crc kubenswrapper[4979]: I0130 21:45:19.091879 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 21:45:20 crc kubenswrapper[4979]: I0130 21:45:20.246205 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 21:45:20 crc kubenswrapper[4979]: I0130 21:45:20.441431 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:45:22 crc kubenswrapper[4979]: I0130 21:45:22.019446 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:45:24 crc kubenswrapper[4979]: I0130 21:45:24.202827 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:45:24 crc kubenswrapper[4979]: I0130 21:45:24.587010 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.867980 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.868300 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" containerID="cri-o://31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" gracePeriod=30 Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.894712 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.894989 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" containerID="cri-o://c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" gracePeriod=30 Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.315925 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.322166 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420448 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420518 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420548 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420587 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420649 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420677 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420716 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.421491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config" (OuterVolumeSpecName: "config") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: 
"c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.421701 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.421911 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config" (OuterVolumeSpecName: "config") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.422555 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca" (OuterVolumeSpecName: "client-ca") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: "c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.422702 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.427626 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt" (OuterVolumeSpecName: "kube-api-access-xwtgt") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: "c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "kube-api-access-xwtgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.428083 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: "c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.428183 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb" (OuterVolumeSpecName: "kube-api-access-rgwbb") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "kube-api-access-rgwbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.428272 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522871 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522929 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522947 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522964 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522982 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522997 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.523014 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.523057 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.523074 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525152 4979 generic.go:334] "Generic (PLEG): container finished" podID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" exitCode=0 Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525210 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerDied","Data":"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525281 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerDied","Data":"18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525302 4979 scope.go:117] "RemoveContainer" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530522 4979 generic.go:334] "Generic (PLEG): container finished" podID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" exitCode=0 Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530585 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerDied","Data":"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530731 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerDied","Data":"f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.545806 4979 scope.go:117] "RemoveContainer" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" Jan 30 21:45:26 crc kubenswrapper[4979]: E0130 21:45:26.546302 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb\": container with ID starting with 31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb not found: ID does not exist" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.546363 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb"} err="failed to get container status \"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb\": rpc error: code = NotFound desc = could not find container \"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb\": container with ID starting with 31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb not found: ID does not exist" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.546397 4979 scope.go:117] "RemoveContainer" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" Jan 30 
21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.557721 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.561958 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.566638 4979 scope.go:117] "RemoveContainer" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" Jan 30 21:45:26 crc kubenswrapper[4979]: E0130 21:45:26.567200 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3\": container with ID starting with c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3 not found: ID does not exist" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.567248 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3"} err="failed to get container status \"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3\": rpc error: code = NotFound desc = could not find container \"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3\": container with ID starting with c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3 not found: ID does not exist" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.574143 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.577564 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.083527 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" path="/var/lib/kubelet/pods/c138f389-e49e-4c26-b2ee-af169b1c8343/volumes" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.084887 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" path="/var/lib/kubelet/pods/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b/volumes" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.818795 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:27 crc kubenswrapper[4979]: E0130 21:45:27.819259 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerName="collect-profiles" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819281 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerName="collect-profiles" Jan 30 21:45:27 crc kubenswrapper[4979]: E0130 21:45:27.819311 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819324 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: E0130 
21:45:27.819345 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819358 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819548 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerName="collect-profiles" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819571 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819595 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.820275 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.828698 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.830143 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.838891 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.838926 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.839593 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840028 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840469 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840693 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840550 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840766 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840768 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840769 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840928 4979 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.841118 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847373 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847676 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847834 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.848763 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.850617 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.863135 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948686 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " 
pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948777 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948843 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948884 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948907 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948935 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948955 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc 
kubenswrapper[4979]: I0130 21:45:27.949861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.950591 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.950675 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.954293 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.980613 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.050663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.050734 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.050790 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.051485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzkz6\" (UniqueName: 
\"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.052096 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.052594 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.054134 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.073891 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.164535 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.187893 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.609497 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.660715 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:28 crc kubenswrapper[4979]: W0130 21:45:28.664653 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e66afa2_6e33_4cc2_8b31_65987d8cd10b.slice/crio-e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115 WatchSource:0}: Error finding container e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115: Status 404 returned error can't find the container with id e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115 Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.702367 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.348992 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.349539 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-454jj" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server" containerID="cri-o://74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2" gracePeriod=2 Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.552676 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.553355 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-npfvh" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server" containerID="cri-o://8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" gracePeriod=2 Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.596259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerStarted","Data":"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.597484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerStarted","Data":"7c14a9171f3c938df9defa43623be8174b1cfbdfa890eb7b412765a04a3f397c"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.597633 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.601791 4979 generic.go:334] "Generic (PLEG): container finished" podID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerID="74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2" exitCode=0 Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.601942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604143 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerStarted","Data":"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerStarted","Data":"e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604899 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.617715 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.623809 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" podStartSLOduration=4.623778415 podStartE2EDuration="4.623778415s" podCreationTimestamp="2026-01-30 21:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:29.617848842 +0000 UTC m=+325.579095865" watchObservedRunningTime="2026-01-30 21:45:29.623778415 +0000 UTC m=+325.585025468" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.651941 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" podStartSLOduration=4.651915828 podStartE2EDuration="4.651915828s" podCreationTimestamp="2026-01-30 21:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:29.637729808 +0000 UTC m=+325.598976861" watchObservedRunningTime="2026-01-30 21:45:29.651915828 +0000 UTC m=+325.613162861" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.756829 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.889802 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"82df7d39-6821-4916-b8c9-534688ca3d5e\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.889892 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"82df7d39-6821-4916-b8c9-534688ca3d5e\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.889967 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"82df7d39-6821-4916-b8c9-534688ca3d5e\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.891121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities" (OuterVolumeSpecName: "utilities") pod "82df7d39-6821-4916-b8c9-534688ca3d5e" (UID: "82df7d39-6821-4916-b8c9-534688ca3d5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.900212 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf" (OuterVolumeSpecName: "kube-api-access-7hzvf") pod "82df7d39-6821-4916-b8c9-534688ca3d5e" (UID: "82df7d39-6821-4916-b8c9-534688ca3d5e"). InnerVolumeSpecName "kube-api-access-7hzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.932814 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.944850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82df7d39-6821-4916-b8c9-534688ca3d5e" (UID: "82df7d39-6821-4916-b8c9-534688ca3d5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.991738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"568a44ae-c892-48a7-b4c0-2d83606e7b95\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.991803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"568a44ae-c892-48a7-b4c0-2d83606e7b95\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.991896 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"568a44ae-c892-48a7-b4c0-2d83606e7b95\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.992257 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.992278 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.992295 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.993217 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities" (OuterVolumeSpecName: "utilities") pod "568a44ae-c892-48a7-b4c0-2d83606e7b95" (UID: "568a44ae-c892-48a7-b4c0-2d83606e7b95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.996444 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj" (OuterVolumeSpecName: "kube-api-access-kqdhj") pod "568a44ae-c892-48a7-b4c0-2d83606e7b95" (UID: "568a44ae-c892-48a7-b4c0-2d83606e7b95"). InnerVolumeSpecName "kube-api-access-kqdhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.033588 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "568a44ae-c892-48a7-b4c0-2d83606e7b95" (UID: "568a44ae-c892-48a7-b4c0-2d83606e7b95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.093764 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.093799 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.093809 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614172 4979 generic.go:334] "Generic (PLEG): container finished" podID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" exitCode=0 Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614266 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614271 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7"} Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614655 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"9e701107804895c162dc5dbfb55c5fb4850bb1995cf07bbee85bb8f8a3ce5a6f"} Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614681 4979 scope.go:117] "RemoveContainer" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.618155 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"897e930b920945770fe85e65189da3f41f538afe25ecb7f6857d9256eed7d54a"} Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.618286 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.637328 4979 scope.go:117] "RemoveContainer" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.664237 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.666304 4979 scope.go:117] "RemoveContainer" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.666133 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.689793 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.693935 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.697370 4979 scope.go:117] "RemoveContainer" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" Jan 30 21:45:30 crc kubenswrapper[4979]: E0130 21:45:30.697792 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7\": container with ID starting with 8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7 not found: ID does not exist" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.697829 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7"} err="failed to get container status \"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7\": rpc error: code = NotFound desc = could not find container \"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7\": container with ID starting with 8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7 not found: ID does not exist" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.697859 4979 scope.go:117] "RemoveContainer" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" Jan 30 21:45:30 crc kubenswrapper[4979]: E0130 21:45:30.698135 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874\": container with ID starting with d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874 not found: ID does not exist" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698157 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874"} err="failed to get container status \"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874\": rpc error: code = NotFound desc = could not find container \"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874\": container with ID starting with 
d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874 not found: ID does not exist" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698174 4979 scope.go:117] "RemoveContainer" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" Jan 30 21:45:30 crc kubenswrapper[4979]: E0130 21:45:30.698551 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab\": container with ID starting with 82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab not found: ID does not exist" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698588 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab"} err="failed to get container status \"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab\": rpc error: code = NotFound desc = could not find container \"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab\": container with ID starting with 82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab not found: ID does not exist" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698609 4979 scope.go:117] "RemoveContainer" containerID="74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.716462 4979 scope.go:117] "RemoveContainer" containerID="8ce38f5c2d102434af1616c327c364faa35dac4f176a6f600fbf112072871235" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.739665 4979 scope.go:117] "RemoveContainer" containerID="bf235c47905ef6c38fcc7f3601d64c6f0ba215a6796ab2b1da97239f211b40de" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.757438 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.078447 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" path="/var/lib/kubelet/pods/568a44ae-c892-48a7-b4c0-2d83606e7b95/volumes" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.079094 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" path="/var/lib/kubelet/pods/82df7d39-6821-4916-b8c9-534688ca3d5e/volumes" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.603446 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.756227 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.756806 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmzzl" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server" containerID="cri-o://17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" gracePeriod=2 Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.960556 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.960829 4979 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-operators-sg6j7" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server" containerID="cri-o://aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" gracePeriod=2 Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.166861 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.224453 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.224571 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.224619 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.227581 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities" (OuterVolumeSpecName: "utilities") pod "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" (UID: "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.233952 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw" (OuterVolumeSpecName: "kube-api-access-gqrjw") pod "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" (UID: "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"). InnerVolumeSpecName "kube-api-access-gqrjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.246397 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" (UID: "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.317707 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.326129 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.326161 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.326173 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.432205 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"444df6ed-3c43-4310-adc6-69ab0a9ea702\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.432348 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"444df6ed-3c43-4310-adc6-69ab0a9ea702\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.432375 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"444df6ed-3c43-4310-adc6-69ab0a9ea702\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.433013 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities" (OuterVolumeSpecName: "utilities") pod "444df6ed-3c43-4310-adc6-69ab0a9ea702" (UID: "444df6ed-3c43-4310-adc6-69ab0a9ea702"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.438340 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc" (OuterVolumeSpecName: "kube-api-access-gc7xc") pod "444df6ed-3c43-4310-adc6-69ab0a9ea702" (UID: "444df6ed-3c43-4310-adc6-69ab0a9ea702"). InnerVolumeSpecName "kube-api-access-gc7xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.534246 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.534294 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.596277 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444df6ed-3c43-4310-adc6-69ab0a9ea702" (UID: "444df6ed-3c43-4310-adc6-69ab0a9ea702"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.638160 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644486 4979 generic.go:334] "Generic (PLEG): container finished" podID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" exitCode=0 Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644552 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"4c62920e03a89d4d5765a230e2b55c002afe184d080ace3bcaa5b06f8f97c1f4"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644601 4979 scope.go:117] "RemoveContainer" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644700 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654479 4979 generic.go:334] "Generic (PLEG): container finished" podID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" exitCode=0 Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654561 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"87982f21eeaee850aff8e29886551952617d82411b159837b48e46f7e706dfb9"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654644 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.664819 4979 scope.go:117] "RemoveContainer" containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.683610 4979 scope.go:117] "RemoveContainer" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.697200 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.707313 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.711229 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.714336 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.718853 4979 scope.go:117] "RemoveContainer" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.719337 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d\": container with ID starting with 17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d not found: ID does not exist" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719391 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d"} err="failed to get container status \"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d\": rpc error: code = NotFound desc = could not find container \"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d\": container with ID starting with 17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719432 4979 scope.go:117] "RemoveContainer" 
containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.719707 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16\": container with ID starting with 20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16 not found: ID does not exist" containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719730 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16"} err="failed to get container status \"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16\": rpc error: code = NotFound desc = could not find container \"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16\": container with ID starting with 20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16 not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719747 4979 scope.go:117] "RemoveContainer" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.721834 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065\": container with ID starting with 3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065 not found: ID does not exist" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.721858 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065"} err="failed to get container status \"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065\": rpc error: code = NotFound desc = could not find container \"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065\": container with ID starting with 3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065 not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.721872 4979 scope.go:117] "RemoveContainer" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.738268 4979 scope.go:117] "RemoveContainer" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.755609 4979 scope.go:117] "RemoveContainer" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.772279 4979 scope.go:117] "RemoveContainer" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.772998 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1\": container with ID starting with aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1 not found: ID does not exist" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" 
Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773115 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1"} err="failed to get container status \"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1\": rpc error: code = NotFound desc = could not find container \"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1\": container with ID starting with aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1 not found: ID does not exist"
Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773148 4979 scope.go:117] "RemoveContainer" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"
Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.773470 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516\": container with ID starting with bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516 not found: ID does not exist" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"
Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773486 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"} err="failed to get container status \"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516\": rpc error: code = NotFound desc = could not find container \"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516\": container with ID starting with bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516 not found: ID does not exist"
Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773502 4979 scope.go:117] "RemoveContainer" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d"
Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.774282 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d\": container with ID starting with 7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d not found: ID does not exist" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d"
Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.774337 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d"} err="failed to get container status \"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d\": rpc error: code = NotFound desc = could not find container \"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d\": container with ID starting with 7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d not found: ID does not exist"
Jan 30 21:45:33 crc kubenswrapper[4979]: I0130 21:45:33.076833 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" path="/var/lib/kubelet/pods/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5/volumes"
Jan 30 21:45:33 crc kubenswrapper[4979]: I0130 21:45:33.077942 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" path="/var/lib/kubelet/pods/444df6ed-3c43-4310-adc6-69ab0a9ea702/volumes"
Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.190337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.588212 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"]
Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.588512 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" containerID="cri-o://46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" gracePeriod=30
Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.609788 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"]
Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.610105 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager" containerID="cri-o://e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" gracePeriod=30
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.195663 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.270675 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303284 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303372 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303444 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303470 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303512 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303545 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303572 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") "
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304611 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304605 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304638 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config" (OuterVolumeSpecName: "config") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304959 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca" (OuterVolumeSpecName: "client-ca") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.305125 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config" (OuterVolumeSpecName: "config") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318342 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318352 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl" (OuterVolumeSpecName: "kube-api-access-j72kl") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "kube-api-access-j72kl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6" (OuterVolumeSpecName: "kube-api-access-qzkz6") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "kube-api-access-qzkz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.404814 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405147 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405214 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405271 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405393 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405531 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405596 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405670 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405730 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.496999 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687594 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" exitCode=0
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687653 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerDied","Data":"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"}
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerDied","Data":"e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115"}
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687752 4979 scope.go:117] "RemoveContainer" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.688094 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689893 4979 generic.go:334] "Generic (PLEG): container finished" podID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" exitCode=0
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689949 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerDied","Data":"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"}
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689966 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689989 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerDied","Data":"7c14a9171f3c938df9defa43623be8174b1cfbdfa890eb7b412765a04a3f397c"}
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.708568 4979 scope.go:117] "RemoveContainer" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.709367 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7\": container with ID starting with e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7 not found: ID does not exist" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.709478 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"} err="failed to get container status \"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7\": rpc error: code = NotFound desc = could not find container \"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7\": container with ID starting with e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7 not found: ID does not exist"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.709567 4979 scope.go:117] "RemoveContainer" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.726138 4979 scope.go:117] "RemoveContainer" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.729578 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b\": container with ID starting with 46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b not found: ID does not exist" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.729648 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"} err="failed to get container status \"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b\": rpc error: code = NotFound desc = could not find container \"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b\": container with ID starting with 46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b not found: ID does not exist"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.732143 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"]
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.743238 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"]
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.758690 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"]
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.763725 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"]
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827301 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"]
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827678 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827699 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827723 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827733 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827745 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827751 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827763 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827769 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827783 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827790 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827796 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827802 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827811 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827817 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827823 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827828 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827836 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827843 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827850 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827857 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-content"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827868 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827880 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827892 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827899 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827909 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827916 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-utilities"
Jan 30 21:45:37 crc kubenswrapper[4979]: E0130
21:45:37.827926 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827934 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828068 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828077 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828089 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828101 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828108 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828120 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828674 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.832398 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d689d4657-lxzrk"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.833086 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.833864 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834213 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834271 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834438 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834781 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.864692 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"]
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.866512 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.867701 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.867729 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.867953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.868097 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.868275 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.874272 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.883669 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.886874 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d689d4657-lxzrk"]
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.912970 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913059 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-serving-cert\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-config\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913202 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913225 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-client-ca\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913251 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913302 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9t4h\" (UniqueName: \"kubernetes.io/projected/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-kube-api-access-m9t4h\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-proxy-ca-bundles\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015250 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-config\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015310 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-client-ca\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015340 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9t4h\" (UniqueName: \"kubernetes.io/projected/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-kube-api-access-m9t4h\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-proxy-ca-bundles\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-serving-cert\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016539 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016675 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-proxy-ca-bundles\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016704 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016735 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-client-ca\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016941 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-config\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.023170 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.035440 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9t4h\" (UniqueName: \"kubernetes.io/projected/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-kube-api-access-m9t4h\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.036779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-serving-cert\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.041298 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.186660 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.192298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.551753 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"]
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.634295 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d689d4657-lxzrk"]
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.700853 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" event={"ID":"81acdeb8-ebe9-40a3-b25c-cb98a9070c15","Type":"ContainerStarted","Data":"53118ab5654366884dbd5fc58ab254b9c3fc7947646ab665058039a4a290a7c7"}
Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.703995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerStarted","Data":"0597e34480d8b61092d333b15257aceea525575d0d6fd9cd29f2039d26375964"}
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.076980 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" path="/var/lib/kubelet/pods/0e66afa2-6e33-4cc2-8b31-65987d8cd10b/volumes"
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.077936 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" path="/var/lib/kubelet/pods/a46553cf-e4b5-4d15-b590-9c6e06819ab5/volumes"
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.713113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerStarted","Data":"4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a"}
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.715851 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.715979 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" event={"ID":"81acdeb8-ebe9-40a3-b25c-cb98a9070c15","Type":"ContainerStarted","Data":"a8416ec064734079a5625a41bc2f84b4e946d8353cda9890c83a696cd83ff47c"}
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.716509 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk"
Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.720781 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.723986 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.736640 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" podStartSLOduration=3.7365810919999998 podStartE2EDuration="3.736581092s" podCreationTimestamp="2026-01-30 21:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:39.734885865 +0000 UTC m=+335.696132898" watchObservedRunningTime="2026-01-30 21:45:39.736581092 +0000 UTC m=+335.697828125" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.779318 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" podStartSLOduration=3.779291685 podStartE2EDuration="3.779291685s" podCreationTimestamp="2026-01-30 21:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:39.777934528 +0000 UTC m=+335.739181571" watchObservedRunningTime="2026-01-30 21:45:39.779291685 +0000 UTC m=+335.740538718" Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.584745 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.585710 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" containerID="cri-o://4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a" gracePeriod=30 Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.829629 4979 generic.go:334] "Generic (PLEG): container finished" podID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerID="4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a" exitCode=0 Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.829688 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerDied","Data":"4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a"} Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.033157 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097864 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097930 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.098958 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.098978 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config" (OuterVolumeSpecName: "config") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.103940 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6" (OuterVolumeSpecName: "kube-api-access-42vm6") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "kube-api-access-42vm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.104323 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200010 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200081 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200096 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200106 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.837019 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerDied","Data":"0597e34480d8b61092d333b15257aceea525575d0d6fd9cd29f2039d26375964"} Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.837098 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.837116 4979 scope.go:117] "RemoveContainer" containerID="4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.845690 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:45:57 crc kubenswrapper[4979]: E0130 21:45:57.846250 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.846287 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.846725 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.847725 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.851622 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.851995 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853190 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853385 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853555 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853562 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.857980 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.888000 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.892193 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.909934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.909976 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.910066 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.910181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: 
\"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011393 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011408 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.012876 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.013719 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.017112 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.037908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " 
pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.174381 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.613981 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:45:58 crc kubenswrapper[4979]: W0130 21:45:58.617598 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e8cfb9_394f_4387_9a89_95c9cc094c81.slice/crio-ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab WatchSource:0}: Error finding container ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab: Status 404 returned error can't find the container with id ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.845160 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerStarted","Data":"27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e"} Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.845472 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerStarted","Data":"ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab"} Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.845487 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.846627 4979 patch_prober.go:28] interesting pod/route-controller-manager-75f959899b-bxkf4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.846674 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.868919 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" podStartSLOduration=2.868894358 podStartE2EDuration="2.868894358s" podCreationTimestamp="2026-01-30 21:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:58.867612183 +0000 UTC m=+354.828859236" watchObservedRunningTime="2026-01-30 21:45:58.868894358 +0000 UTC m=+354.830141391" Jan 30 21:45:59 crc kubenswrapper[4979]: I0130 21:45:59.077865 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" 
path="/var/lib/kubelet/pods/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe/volumes" Jan 30 21:45:59 crc kubenswrapper[4979]: I0130 21:45:59.856837 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:46:02 crc kubenswrapper[4979]: I0130 21:46:02.039911 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:02 crc kubenswrapper[4979]: I0130 21:46:02.040275 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:14 crc kubenswrapper[4979]: I0130 21:46:14.812850 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.617281 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.618079 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" containerID="cri-o://27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e" gracePeriod=30 Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.945893 4979 generic.go:334] "Generic (PLEG): container finished" podID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerID="27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e" exitCode=0 Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.945944 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerDied","Data":"27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e"} Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.055312 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087649 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087836 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087912 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087976 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.088705 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config" (OuterVolumeSpecName: "config") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.088749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.095120 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd" (OuterVolumeSpecName: "kube-api-access-6kvkd") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "kube-api-access-6kvkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.097994 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188696 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188726 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188736 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188745 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.863821 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl"] Jan 30 21:46:17 crc kubenswrapper[4979]: E0130 21:46:17.864873 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.864970 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.865250 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.865854 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.888783 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl"] Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897650 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc4mj\" (UniqueName: \"kubernetes.io/projected/9d8cb00a-591e-48f0-8da1-4157327277c5-kube-api-access-vc4mj\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8cb00a-591e-48f0-8da1-4157327277c5-serving-cert\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897777 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-client-ca\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897800 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-config\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.952553 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerDied","Data":"ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab"} Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.952614 4979 scope.go:117] "RemoveContainer" containerID="27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.952665 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.981998 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.985115 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998757 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc4mj\" (UniqueName: \"kubernetes.io/projected/9d8cb00a-591e-48f0-8da1-4157327277c5-kube-api-access-vc4mj\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8cb00a-591e-48f0-8da1-4157327277c5-serving-cert\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998856 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-client-ca\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998875 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-config\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.000533 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-client-ca\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.000549 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-config\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.006490 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8cb00a-591e-48f0-8da1-4157327277c5-serving-cert\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc 
kubenswrapper[4979]: I0130 21:46:18.028513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc4mj\" (UniqueName: \"kubernetes.io/projected/9d8cb00a-591e-48f0-8da1-4157327277c5-kube-api-access-vc4mj\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.195073 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.687180 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl"] Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.959435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" event={"ID":"9d8cb00a-591e-48f0-8da1-4157327277c5","Type":"ContainerStarted","Data":"4c8cb1586ac43df894a84c19b4a0d0c262d63486143a16bfa9f843d91b65ae75"} Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.959495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" event={"ID":"9d8cb00a-591e-48f0-8da1-4157327277c5","Type":"ContainerStarted","Data":"471896edc903d9fb464e2321c8c835f68325e7578978f0d7e9a3d4c84909a07f"} Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.959814 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.977481 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" podStartSLOduration=2.977458458 podStartE2EDuration="2.977458458s" podCreationTimestamp="2026-01-30 21:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:18.973824427 +0000 UTC m=+374.935071460" watchObservedRunningTime="2026-01-30 21:46:18.977458458 +0000 UTC m=+374.938705491" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.077998 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" path="/var/lib/kubelet/pods/e6e8cfb9-394f-4387-9a89-95c9cc094c81/volumes" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.130309 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.737339 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxkg"] Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.738499 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.757478 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxkg"] Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825436 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-trusted-ca\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825497 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2bd\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-kube-api-access-kt2bd\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825553 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-certificates\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825574 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2d8b6b0a-955d-42cb-a277-3018daf971ad-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2d8b6b0a-955d-42cb-a277-3018daf971ad-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825619 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825637 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-tls\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825656 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.847284 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927233 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-trusted-ca\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt2bd\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-kube-api-access-kt2bd\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927347 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-certificates\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927370 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2d8b6b0a-955d-42cb-a277-3018daf971ad-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2d8b6b0a-955d-42cb-a277-3018daf971ad-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927412 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-tls\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.928477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2d8b6b0a-955d-42cb-a277-3018daf971ad-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.929098 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-certificates\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.929714 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-trusted-ca\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.942855 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-tls\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.942892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2d8b6b0a-955d-42cb-a277-3018daf971ad-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.946353 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.947494 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt2bd\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-kube-api-access-kt2bd\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.107517 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.520946 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxkg"] Jan 30 21:46:20 crc kubenswrapper[4979]: W0130 21:46:20.526416 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8b6b0a_955d_42cb_a277_3018daf971ad.slice/crio-3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f WatchSource:0}: Error finding container 3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f: Status 404 returned error can't find the container with id 3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.976111 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" event={"ID":"2d8b6b0a-955d-42cb-a277-3018daf971ad","Type":"ContainerStarted","Data":"1b4c1c9f355a5b85021efd1353e93fe5a460913ea006314018aa6be4bea6033f"} Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.976215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" event={"ID":"2d8b6b0a-955d-42cb-a277-3018daf971ad","Type":"ContainerStarted","Data":"3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f"} Jan 30 21:46:21 crc kubenswrapper[4979]: I0130 21:46:21.984295 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:32 crc kubenswrapper[4979]: I0130 21:46:32.040379 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:32 crc kubenswrapper[4979]: I0130 21:46:32.041118 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:39 crc kubenswrapper[4979]: I0130 21:46:39.853735 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" containerID="cri-o://81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d" gracePeriod=15 Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.107145 4979 generic.go:334] "Generic (PLEG): container finished" podID="de06742d-2533-4510-abec-ff0f35d84a45" containerID="81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d" exitCode=0 Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.107276 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerDied","Data":"81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d"} Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.113142 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.134100 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" podStartSLOduration=21.134077713 podStartE2EDuration="21.134077713s" podCreationTimestamp="2026-01-30 21:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:20.994124512 +0000 UTC m=+376.955371565" watchObservedRunningTime="2026-01-30 21:46:40.134077713 +0000 UTC m=+396.095324746" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.169204 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.311596 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.354676 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5586db8965-x5tfp"] Jan 30 21:46:40 crc kubenswrapper[4979]: E0130 21:46:40.355009 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.355070 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.355174 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.355716 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.358726 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5586db8965-x5tfp"] Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.462468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.462530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.462643 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463409 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463765 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463790 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463827 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463892 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464409 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464488 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464539 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464569 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465178 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465186 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465219 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465294 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465665 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-login\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/660a3a75-e96c-432c-80b8-aea9a9382317-audit-dir\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465972 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-session\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 
21:46:40.466111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp6p\" (UniqueName: \"kubernetes.io/projected/660a3a75-e96c-432c-80b8-aea9a9382317-kube-api-access-fwp6p\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466194 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466249 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-error\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466273 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-audit-policies\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466303 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466326 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-service-ca\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466453 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: 
\"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466487 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-router-certs\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466552 4979 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466569 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466585 4979 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466598 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466611 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.469445 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff" (OuterVolumeSpecName: "kube-api-access-sztff") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "kube-api-access-sztff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.469850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.470189 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471154 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471980 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.472200 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.472637 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.567889 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/660a3a75-e96c-432c-80b8-aea9a9382317-audit-dir\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.567992 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568045 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568077 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-session\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp6p\" (UniqueName: \"kubernetes.io/projected/660a3a75-e96c-432c-80b8-aea9a9382317-kube-api-access-fwp6p\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568110 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/660a3a75-e96c-432c-80b8-aea9a9382317-audit-dir\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568142 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-audit-policies\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568364 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-error\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568425 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568457 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-service-ca\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568490 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568622 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568662 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-router-certs\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568711 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-login\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568839 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568860 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568886 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568905 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568924 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568944 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568963 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568986 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569009 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569173 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-audit-policies\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569523 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569559 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc 
kubenswrapper[4979]: I0130 21:46:40.570104 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-service-ca\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.571879 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-session\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.572542 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-router-certs\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.573731 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.574459 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.575650 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.575974 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.582168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-login\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 
21:46:40.584903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-error\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.602988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp6p\" (UniqueName: \"kubernetes.io/projected/660a3a75-e96c-432c-80b8-aea9a9382317-kube-api-access-fwp6p\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.670955 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.095081 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5586db8965-x5tfp"] Jan 30 21:46:41 crc kubenswrapper[4979]: W0130 21:46:41.109310 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod660a3a75_e96c_432c_80b8_aea9a9382317.slice/crio-2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1 WatchSource:0}: Error finding container 2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1: Status 404 returned error can't find the container with id 2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1 Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.116519 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerDied","Data":"246d40c550fcc6c9fdc34ebbfdb6355e89a001f7901886dab00180fbdbb32fa5"} Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.116604 4979 scope.go:117] "RemoveContainer" containerID="81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d" Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.116666 4979 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.119653 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" event={"ID":"660a3a75-e96c-432c-80b8-aea9a9382317","Type":"ContainerStarted","Data":"2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1"}
Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.172978 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"]
Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.178585 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"]
Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.131806 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" event={"ID":"660a3a75-e96c-432c-80b8-aea9a9382317","Type":"ContainerStarted","Data":"77d29596ccc62748fe997b8be8cfbb321d441df710b36231756b3dfa47b1500c"}
Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.132491 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp"
Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.140347 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp"
Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.163108 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" podStartSLOduration=28.163080305 podStartE2EDuration="28.163080305s" podCreationTimestamp="2026-01-30 21:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:42.159292574 +0000 UTC m=+398.120539637" watchObservedRunningTime="2026-01-30 21:46:42.163080305 +0000 UTC m=+398.124327368"
Jan 30 21:46:43 crc kubenswrapper[4979]: I0130 21:46:43.079549 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de06742d-2533-4510-abec-ff0f35d84a45" path="/var/lib/kubelet/pods/de06742d-2533-4510-abec-ff0f35d84a45/volumes"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.185568 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk444"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.186507 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk444" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server" containerID="cri-o://951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" gracePeriod=30
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.191656 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krrkl"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.191925 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-krrkl" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server" containerID="cri-o://165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f" gracePeriod=30
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.202925 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.203168 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" containerID="cri-o://585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80" gracePeriod=30
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.215471 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.215749 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wjwlb" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server" containerID="cri-o://66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7" gracePeriod=30
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.222752 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nzltj"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.225360 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.232306 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.232662 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2tvd8" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server" containerID="cri-o://e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" gracePeriod=30
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.237232 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nzltj"]
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.304122 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.304186 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtq5q\" (UniqueName: \"kubernetes.io/projected/ea935cc6-1adc-4763-bf1c-8c08fec3894f-kube-api-access-gtq5q\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.304245 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.406198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.406379 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.406408 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtq5q\" (UniqueName: \"kubernetes.io/projected/ea935cc6-1adc-4763-bf1c-8c08fec3894f-kube-api-access-gtq5q\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.407659 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.415254 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.425716 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtq5q\" (UniqueName: \"kubernetes.io/projected/ea935cc6-1adc-4763-bf1c-8c08fec3894f-kube-api-access-gtq5q\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.545874 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.700256 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.813439 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") "
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.813596 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") "
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.813650 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") "
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.814465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities" (OuterVolumeSpecName: "utilities") pod "3641ad73-644b-4d71-860b-4d8b7e6a3a6d" (UID: "3641ad73-644b-4d71-860b-4d8b7e6a3a6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.817271 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm" (OuterVolumeSpecName: "kube-api-access-nmmtm") pod "3641ad73-644b-4d71-860b-4d8b7e6a3a6d" (UID: "3641ad73-644b-4d71-860b-4d8b7e6a3a6d"). InnerVolumeSpecName "kube-api-access-nmmtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.915110 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.915145 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.944758 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3641ad73-644b-4d71-860b-4d8b7e6a3a6d" (UID: "3641ad73-644b-4d71-860b-4d8b7e6a3a6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:58 crc kubenswrapper[4979]: W0130 21:46:58.948889 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea935cc6_1adc_4763_bf1c_8c08fec3894f.slice/crio-f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033 WatchSource:0}: Error finding container f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033: Status 404 returned error can't find the container with id f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033
Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.950404 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nzltj"]
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.016879 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.237682 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.238037 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerDied","Data":"585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.238003 4979 generic.go:334] "Generic (PLEG): container finished" podID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerID="585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80" exitCode=0
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243024 4979 generic.go:334] "Generic (PLEG): container finished" podID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" exitCode=0
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243176 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243208 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"2225585b885540daf5c8798c55ba2f9f3246f245430840cea94336a10b265b9b"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243231 4979 scope.go:117] "RemoveContainer" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.258573 4979 generic.go:334] "Generic (PLEG): container finished" podID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerID="66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7" exitCode=0
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.258664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262811 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" exitCode=0
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262889 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262919 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262998 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.265691 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" event={"ID":"ea935cc6-1adc-4763-bf1c-8c08fec3894f","Type":"ContainerStarted","Data":"f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.280132 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"]
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.282758 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerID="165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f" exitCode=0
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.282780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f"}
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.284634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"]
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.291325 4979 scope.go:117] "RemoveContainer" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.322198 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.322273 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.322360 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.323053 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities" (OuterVolumeSpecName: "utilities") pod "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" (UID: "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.329860 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh" (OuterVolumeSpecName: "kube-api-access-2rtgh") pod "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" (UID: "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"). InnerVolumeSpecName "kube-api-access-2rtgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.383977 4979 scope.go:117] "RemoveContainer" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.403474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" (UID: "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.413945 4979 scope.go:117] "RemoveContainer" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"
Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.414506 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777\": container with ID starting with e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777 not found: ID does not exist" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.414548 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"} err="failed to get container status \"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777\": rpc error: code = NotFound desc = could not find container \"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777\": container with ID starting with e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777 not found: ID does not exist"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.414570 4979 scope.go:117] "RemoveContainer" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"
Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.415384 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126\": container with ID starting with c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126 not found: ID does not exist" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415413 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"} err="failed to get container status \"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126\": rpc error: code = NotFound desc = could not find container \"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126\": container with ID starting with c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126 not found: ID does not exist"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415426 4979 scope.go:117] "RemoveContainer" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"
Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.415801 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c\": container with ID starting with f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c not found: ID does not exist" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415821 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"} err="failed to get container status \"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c\": rpc error: code = NotFound desc = could not find container \"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c\": container with ID starting with f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c not found: ID does not exist"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415833 4979 scope.go:117] "RemoveContainer" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.423924 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.423956 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.423965 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.451183 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.456992 4979 scope.go:117] "RemoveContainer" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.461230 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.479016 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.488414 4979 scope.go:117] "RemoveContainer" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.512714 4979 scope.go:117] "RemoveContainer" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"
Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.513838 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631\": container with ID starting with 951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631 not found: ID does not exist" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.513888 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"} err="failed to get container status \"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631\": rpc error: code = NotFound desc = could not find container \"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631\": container with ID starting with 951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631 not found: ID does not exist"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.513918 4979 scope.go:117] "RemoveContainer" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"
Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.514255 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6\": container with ID starting with 4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6 not found: ID does not exist" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.514280 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"} err="failed to get container status \"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6\": rpc error: code = NotFound desc = could not find container \"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6\": container with ID starting with 4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6 not found: ID does not exist"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.514293 4979 scope.go:117] "RemoveContainer" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"
Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.514507 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1\": container with ID starting with d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1 not found: ID does not exist" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.514534 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"} err="failed to get container status \"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1\": rpc error: code = NotFound desc = could not find container \"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1\": container with ID starting with d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1 not found: ID does not exist"
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525257 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"15489ac0-9ae3-4068-973c-fd1ea98642c3\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525379 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525411 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525471 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"9ced41eb-6843-4dfe-81c7-267a56f75a73\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525569 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"9ced41eb-6843-4dfe-81c7-267a56f75a73\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525591 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"15489ac0-9ae3-4068-973c-fd1ea98642c3\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525635 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525662 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"15489ac0-9ae3-4068-973c-fd1ea98642c3\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525700 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"9ced41eb-6843-4dfe-81c7-267a56f75a73\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") "
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.526265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "15489ac0-9ae3-4068-973c-fd1ea98642c3" (UID: "15489ac0-9ae3-4068-973c-fd1ea98642c3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.526548 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities" (OuterVolumeSpecName: "utilities") pod "cfb214a7-6df6-4fd6-a74c-db4f38b0a086" (UID: "cfb214a7-6df6-4fd6-a74c-db4f38b0a086"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.526641 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.527287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities" (OuterVolumeSpecName: "utilities") pod "9ced41eb-6843-4dfe-81c7-267a56f75a73" (UID: "9ced41eb-6843-4dfe-81c7-267a56f75a73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.528591 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66" (OuterVolumeSpecName: "kube-api-access-nls66") pod "cfb214a7-6df6-4fd6-a74c-db4f38b0a086" (UID: "cfb214a7-6df6-4fd6-a74c-db4f38b0a086"). InnerVolumeSpecName "kube-api-access-nls66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.528854 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn" (OuterVolumeSpecName: "kube-api-access-zltvn") pod "15489ac0-9ae3-4068-973c-fd1ea98642c3" (UID: "15489ac0-9ae3-4068-973c-fd1ea98642c3"). InnerVolumeSpecName "kube-api-access-zltvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.529007 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6" (OuterVolumeSpecName: "kube-api-access-snrx6") pod "9ced41eb-6843-4dfe-81c7-267a56f75a73" (UID: "9ced41eb-6843-4dfe-81c7-267a56f75a73"). InnerVolumeSpecName "kube-api-access-snrx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.529187 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "15489ac0-9ae3-4068-973c-fd1ea98642c3" (UID: "15489ac0-9ae3-4068-973c-fd1ea98642c3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.546623 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfb214a7-6df6-4fd6-a74c-db4f38b0a086" (UID: "cfb214a7-6df6-4fd6-a74c-db4f38b0a086"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.592198 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk444"]
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.595292 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk444"]
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.614017 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ced41eb-6843-4dfe-81c7-267a56f75a73" (UID: "9ced41eb-6843-4dfe-81c7-267a56f75a73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628589 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628636 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628650 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628666 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628681 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628693 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628704 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628716 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") on node \"crc\" DevicePath \"\""
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.291233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" event={"ID":"ea935cc6-1adc-4763-bf1c-8c08fec3894f","Type":"ContainerStarted","Data":"a98a4834f147d0c9448522daffe2683971e336e5c5349c1eb38bc8863c0ae3ef"}
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.292604 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.296316 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.297974 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"ef80ed7d6ea466150a57b7d4595c84c46d03f43e54dcb40334059a4c99c74be3"}
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.298072 4979 scope.go:117] "RemoveContainer" containerID="165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.298264 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.313612 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" podStartSLOduration=2.313590583 podStartE2EDuration="2.313590583s" podCreationTimestamp="2026-01-30 21:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:00.310759386 +0000 UTC m=+416.272006419" watchObservedRunningTime="2026-01-30 21:47:00.313590583 +0000 UTC m=+416.274837606"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.318153 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerDied","Data":"77916c27a3bed0009808e06c73482e7ba563d922fb5c460a56269b992ef94952"}
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.318277 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.325926 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"69b34253c166acfc981a0414523d053e63aae7c6e06110f5fe68cf8028008964"}
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.326017 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.340948 4979 scope.go:117] "RemoveContainer" containerID="9c8374b15b5619f4f1304cf75cea07e98769e40d36978831645aa6ad442f9748"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.370746 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krrkl"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.375177 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-krrkl"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.399026 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.405282 4979 scope.go:117] "RemoveContainer" containerID="ac193c08f8b37b1caaa0e8f2fd6642d2080bfcadd0f1988fbb608a5fad551f06"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.406222 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.420879 4979 scope.go:117] "RemoveContainer" containerID="585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.425581 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.429510 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.438680 4979 scope.go:117] "RemoveContainer" containerID="66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.453494 4979 scope.go:117] "RemoveContainer" containerID="6777c7a712aaeb3b92c712ea13c14e93a0636f80d815df1f08df98f2e3cc68fe"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.468803 4979 scope.go:117] "RemoveContainer" containerID="79a85f996439ff844121a3f1030805086e2c3395fd9f9a97d7660f7b7319ecdd"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.590866 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"]
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591122 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591136 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591149 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591155 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591169 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591175 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591185 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591190 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591199 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591204 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591211 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591217 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591225 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591233 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591244 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591258 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591265 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591273 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591279 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591288 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591293 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591300 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591305 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-utilities"
Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591313 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591319 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-content"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591423 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591434 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591440 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591450 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591461 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.592278 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.596679 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.599820 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"]
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.641762 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.641828 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.641858 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.742834 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.742950 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.743006 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.743612 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.743805 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.761358 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.907683 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v"
Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.095448 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" path="/var/lib/kubelet/pods/15489ac0-9ae3-4068-973c-fd1ea98642c3/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.096540 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" path="/var/lib/kubelet/pods/3641ad73-644b-4d71-860b-4d8b7e6a3a6d/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.097219 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" path="/var/lib/kubelet/pods/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.098359 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" path="/var/lib/kubelet/pods/9ced41eb-6843-4dfe-81c7-267a56f75a73/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.099018 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" path="/var/lib/kubelet/pods/cfb214a7-6df6-4fd6-a74c-db4f38b0a086/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.301350 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.335020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerStarted","Data":"839a0e21c6342d6c49c0683bac9adda801e1ebfd8079dc25226f6fa62891ca90"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.039479 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.039539 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.039591 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.040299 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.040383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" 
containerID="cri-o://1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b" gracePeriod=600 Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.344907 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b" exitCode=0 Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.344991 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.345284 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.345308 4979 scope.go:117] "RemoveContainer" containerID="92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.359723 4979 generic.go:334] "Generic (PLEG): container finished" podID="135dc03e-075f-41a4-934c-8d914d497f69" containerID="2775cfa6f3efbca70770c0157c242e36a5de365efbaf9c6628031b3077d49317" exitCode=0 Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.359849 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"2775cfa6f3efbca70770c0157c242e36a5de365efbaf9c6628031b3077d49317"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.395025 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8s6x"] Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.402544 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.405567 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.413057 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8s6x"] Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.466832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-utilities\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.467259 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-catalog-content\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.467377 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2np9\" (UniqueName: \"kubernetes.io/projected/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-kube-api-access-t2np9\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.568475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-catalog-content\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.568942 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2np9\" (UniqueName: \"kubernetes.io/projected/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-kube-api-access-t2np9\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.569008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-utilities\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.569487 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-catalog-content\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.569545 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-utilities\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " 
pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.592874 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2np9\" (UniqueName: \"kubernetes.io/projected/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-kube-api-access-t2np9\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.720657 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.996814 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfnsx"] Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.998292 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.001142 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.002026 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfnsx"] Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.079157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-utilities\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.079246 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4md\" (UniqueName: \"kubernetes.io/projected/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-kube-api-access-xb4md\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.079286 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-catalog-content\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.115923 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8s6x"] Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.180848 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-catalog-content\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.180900 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-utilities\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " 
pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.180981 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4md\" (UniqueName: \"kubernetes.io/projected/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-kube-api-access-xb4md\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.181467 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-catalog-content\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.181522 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-utilities\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.202192 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4md\" (UniqueName: \"kubernetes.io/projected/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-kube-api-access-xb4md\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.315506 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.365780 4979 generic.go:334] "Generic (PLEG): container finished" podID="ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d" containerID="7b99e97c5de516482b73f4c44ef1aba9c5e09ade1d5185a17072c1d139a4b9a5" exitCode=0 Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.365859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerDied","Data":"7b99e97c5de516482b73f4c44ef1aba9c5e09ade1d5185a17072c1d139a4b9a5"} Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.365884 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerStarted","Data":"708050d220016c9693a3f7d1f85a1117831ba0120073a81316a37b568c72fe7f"} Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.368556 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerStarted","Data":"d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a"} Jan 30 21:47:03 crc kubenswrapper[4979]: W0130 21:47:03.763360 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb5ba6de_4ef3_49a4_bd09_1ca00d210025.slice/crio-8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8 WatchSource:0}: Error finding container 8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8: Status 404 returned error can't find the container with id 
8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8 Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.765624 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfnsx"] Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.381772 4979 generic.go:334] "Generic (PLEG): container finished" podID="eb5ba6de-4ef3-49a4-bd09-1ca00d210025" containerID="f37e167a660c612d1348cb7e55c35fbb6038e87ee45d4ece748a0cf2d0fa1d4a" exitCode=0 Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.381845 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerDied","Data":"f37e167a660c612d1348cb7e55c35fbb6038e87ee45d4ece748a0cf2d0fa1d4a"} Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.381897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerStarted","Data":"8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8"} Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.383842 4979 generic.go:334] "Generic (PLEG): container finished" podID="135dc03e-075f-41a4-934c-8d914d497f69" containerID="d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a" exitCode=0 Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.383903 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a"} Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.787834 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7jr2p"] Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.789188 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.791945 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.805155 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jr2p"] Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.809762 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-utilities\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.809814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwwk\" (UniqueName: \"kubernetes.io/projected/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-kube-api-access-gvwwk\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.809909 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-catalog-content\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.910953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-utilities\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911013 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwwk\" (UniqueName: \"kubernetes.io/projected/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-kube-api-access-gvwwk\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-catalog-content\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911605 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-catalog-content\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911878 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-utilities\") pod \"redhat-marketplace-7jr2p\" (UID: 
\"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.938567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwwk\" (UniqueName: \"kubernetes.io/projected/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-kube-api-access-gvwwk\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.112955 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.121667 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.221131 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" containerID="cri-o://772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb" gracePeriod=30 Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.390364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerStarted","Data":"eeb154b4cda9a7e6e8b3aaefc06a886e9df015be60988d21b2794ff047bde9ff"} Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.392346 4979 generic.go:334] "Generic (PLEG): container finished" podID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerID="772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb" exitCode=0 Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.392403 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerDied","Data":"772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb"} Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.510644 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jr2p"] Jan 30 21:47:05 crc kubenswrapper[4979]: W0130 21:47:05.518306 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ed839b_8a68_4f8d_b12b_dac0b2fae9d9.slice/crio-02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933 WatchSource:0}: Error finding container 02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933: Status 404 returned error can't find the container with id 02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933 Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.324612 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329197 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329237 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329308 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329339 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329363 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329397 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329546 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.330661 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.330680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.354222 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.354941 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.354985 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk" (OuterVolumeSpecName: "kube-api-access-s5jlk") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "kube-api-access-s5jlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.355306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.355532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.357500 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.398884 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerDied","Data":"7b232422461df3a64ba9f7d1e8e42a5bbd92a1d12e44b90cbcab93e3d93f6389"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.398942 4979 scope.go:117] "RemoveContainer" containerID="772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.398948 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.402111 4979 generic.go:334] "Generic (PLEG): container finished" podID="ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d" containerID="eeb154b4cda9a7e6e8b3aaefc06a886e9df015be60988d21b2794ff047bde9ff" exitCode=0 Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.402159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerDied","Data":"eeb154b4cda9a7e6e8b3aaefc06a886e9df015be60988d21b2794ff047bde9ff"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.403846 4979 generic.go:334] "Generic (PLEG): container finished" podID="f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9" containerID="c9a57a1bac9c8bc957a5e2f3b7739ffde4af68eeff931e2133843031cbf28f88" exitCode=0 Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.403929 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerDied","Data":"c9a57a1bac9c8bc957a5e2f3b7739ffde4af68eeff931e2133843031cbf28f88"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.403963 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerStarted","Data":"02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.406189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerStarted","Data":"987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432831 4979 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432894 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432911 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432929 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432941 4979 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432953 4979 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432966 4979 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.466087 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvj6v" podStartSLOduration=3.03537498 podStartE2EDuration="6.466062665s" podCreationTimestamp="2026-01-30 21:47:00 +0000 UTC" firstStartedPulling="2026-01-30 21:47:02.367540258 +0000 UTC m=+418.328787291" lastFinishedPulling="2026-01-30 21:47:05.798227943 +0000 UTC m=+421.759474976" observedRunningTime="2026-01-30 21:47:06.465084628 +0000 UTC m=+422.426331681" watchObservedRunningTime="2026-01-30 21:47:06.466062665 +0000 UTC m=+422.427309688" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.485952 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.489704 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:47:07 crc kubenswrapper[4979]: I0130 21:47:07.164425 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" path="/var/lib/kubelet/pods/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08/volumes" Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.421259 4979 generic.go:334] "Generic (PLEG): container finished" podID="eb5ba6de-4ef3-49a4-bd09-1ca00d210025" containerID="c312300789c563f974e43fa424703d387daec72910ff9772f82e77c140ece03e" exitCode=0 Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.421325 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerDied","Data":"c312300789c563f974e43fa424703d387daec72910ff9772f82e77c140ece03e"} Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.426394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerStarted","Data":"0ea2b5bbf15e4bc7b0a93651ab8f32f9f56af3fb66e4f350462b29d8244055ef"} Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.458928 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8s6x" podStartSLOduration=1.929896502 podStartE2EDuration="6.458903423s" podCreationTimestamp="2026-01-30 21:47:02 +0000 UTC" firstStartedPulling="2026-01-30 21:47:03.368848508 +0000 UTC m=+419.330095541" lastFinishedPulling="2026-01-30 21:47:07.897855429 +0000 UTC m=+423.859102462" 
observedRunningTime="2026-01-30 21:47:08.455621885 +0000 UTC m=+424.416868918" watchObservedRunningTime="2026-01-30 21:47:08.458903423 +0000 UTC m=+424.420150456" Jan 30 21:47:10 crc kubenswrapper[4979]: I0130 21:47:10.908684 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:10 crc kubenswrapper[4979]: I0130 21:47:10.909284 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:10 crc kubenswrapper[4979]: I0130 21:47:10.947827 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.147480 4979 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podde06742d-2533-4510-abec-ff0f35d84a45"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podde06742d-2533-4510-abec-ff0f35d84a45] : Timed out while waiting for systemd to remove kubepods-burstable-podde06742d_2533_4510_abec_ff0f35d84a45.slice" Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.444485 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerStarted","Data":"7739c01b720ac89e7854c2e1880d8a3f2cf3ec49342814a15cddb7299ac74dd2"} Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.446487 4979 generic.go:334] "Generic (PLEG): container finished" podID="f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9" containerID="1f33be4ed9755d85029de88fd7a0300d054960f16f952a88399c3e320da1c161" exitCode=0 Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.446689 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerDied","Data":"1f33be4ed9755d85029de88fd7a0300d054960f16f952a88399c3e320da1c161"} Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.471662 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfnsx" podStartSLOduration=4.276373873 podStartE2EDuration="9.471637223s" podCreationTimestamp="2026-01-30 21:47:02 +0000 UTC" firstStartedPulling="2026-01-30 21:47:04.401891965 +0000 UTC m=+420.363138998" lastFinishedPulling="2026-01-30 21:47:09.597155315 +0000 UTC m=+425.558402348" observedRunningTime="2026-01-30 21:47:11.467949853 +0000 UTC m=+427.429196886" watchObservedRunningTime="2026-01-30 21:47:11.471637223 +0000 UTC m=+427.432884286" Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.505097 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:12 crc kubenswrapper[4979]: I0130 21:47:12.721502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:12 crc kubenswrapper[4979]: I0130 21:47:12.721903 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.316146 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.316215 4979 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.362286 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.457440 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerStarted","Data":"32130af28b1071620871ecc1045d52462bb2aff28cda2c02f94f3d183a6bc005"} Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.480259 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jr2p" podStartSLOduration=3.664410297 podStartE2EDuration="9.480238255s" podCreationTimestamp="2026-01-30 21:47:04 +0000 UTC" firstStartedPulling="2026-01-30 21:47:06.876631222 +0000 UTC m=+422.837878255" lastFinishedPulling="2026-01-30 21:47:12.69245918 +0000 UTC m=+428.653706213" observedRunningTime="2026-01-30 21:47:13.47264137 +0000 UTC m=+429.433888403" watchObservedRunningTime="2026-01-30 21:47:13.480238255 +0000 UTC m=+429.441485308" Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.793192 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8s6x" podUID="ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d" containerName="registry-server" probeResult="failure" output=< Jan 30 21:47:13 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 21:47:13 crc kubenswrapper[4979]: > Jan 30 21:47:15 crc kubenswrapper[4979]: I0130 21:47:15.122806 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:15 crc kubenswrapper[4979]: I0130 21:47:15.123144 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:15 crc kubenswrapper[4979]: I0130 21:47:15.165821 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:22 crc kubenswrapper[4979]: I0130 21:47:22.772692 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:22 crc kubenswrapper[4979]: I0130 21:47:22.825494 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:23 crc kubenswrapper[4979]: I0130 21:47:23.363218 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:25 crc kubenswrapper[4979]: I0130 21:47:25.198679 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:49:02 crc kubenswrapper[4979]: I0130 21:49:02.039858 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:49:02 crc kubenswrapper[4979]: I0130 21:49:02.040705 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:49:05 crc kubenswrapper[4979]: I0130 21:49:05.424912 4979 scope.go:117] "RemoveContainer" containerID="1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c" Jan 30 21:49:05 crc kubenswrapper[4979]: I0130 21:49:05.454806 4979 scope.go:117] "RemoveContainer" containerID="0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e" Jan 30 21:49:32 crc kubenswrapper[4979]: I0130 21:49:32.040204 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:49:32 crc kubenswrapper[4979]: I0130 21:49:32.041134 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.040311 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.041219 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.041315 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.041974 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.042069 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43" gracePeriod=600 Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.883413 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43" exitCode=0 Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.883512 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43"} Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.884434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222"} Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.884480 4979 scope.go:117] "RemoveContainer" containerID="1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b" Jan 30 21:52:02 crc kubenswrapper[4979]: I0130 21:52:02.039516 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:52:02 crc kubenswrapper[4979]: I0130 21:52:02.041194 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:52:32 crc kubenswrapper[4979]: I0130 21:52:32.040180 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:52:32 crc kubenswrapper[4979]: I0130 21:52:32.041172 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:52:40 crc kubenswrapper[4979]: I0130 21:52:40.875595 4979 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.039521 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.040593 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.040685 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.041826 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.041938 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222" gracePeriod=600 Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.246706 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222" exitCode=0 Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.246920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222"} Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.247322 4979 scope.go:117] "RemoveContainer" containerID="de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43" Jan 30 21:53:03 crc kubenswrapper[4979]: I0130 21:53:03.265254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.162191 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.167856 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" containerID="cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.167891 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" containerID="cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168016 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" containerID="cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168099 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" containerID="cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168157 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168204 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" containerID="cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168280 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" containerID="cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.222238 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" containerID="cri-o://924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.868338 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.875821 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-acl-logging/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.876513 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-controller/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.877154 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.935657 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r6k8t"] Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.935978 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.935997 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936011 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936019 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936046 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936055 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936065 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kubecfg-setup" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936072 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kubecfg-setup" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936082 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936089 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936098 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936105 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936160 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936170 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936182 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936332 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936346 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936355 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936376 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936384 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936394 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936409 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936417 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936576 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936586 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936594 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936608 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936618 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936628 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936640 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936650 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936663 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936671 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936680 4979 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936689 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936922 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936937 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936951 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936958 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.937106 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.941837 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.943985 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-log-socket\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944072 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2f440e9-633b-41c3-ba83-2f6195004621-ovn-node-metrics-cert\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944133 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-kubelet\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-node-log\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc 
kubenswrapper[4979]: I0130 21:54:38.944227 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-script-lib\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-var-lib-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-etc-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944372 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-netns\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-bin\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944445 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-slash\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-config\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944497 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-systemd-units\") pod \"ovnkube-node-r6k8t\" (UID: 
\"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944527 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944556 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677b9\" (UniqueName: \"kubernetes.io/projected/d2f440e9-633b-41c3-ba83-2f6195004621-kube-api-access-677b9\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944588 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-ovn\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944708 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-systemd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-netd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944857 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-env-overrides\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.983825 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/2.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.984494 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.984564 4979 generic.go:334] "Generic (PLEG): container finished" podID="6722e8df-a635-4808-b6b9-d5633fc3d34b" containerID="63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0" exitCode=2 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.984636 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerDied","Data":"63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0"} Jan 30 21:54:38 
crc kubenswrapper[4979]: I0130 21:54:38.984696 4979 scope.go:117] "RemoveContainer" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.985797 4979 scope.go:117] "RemoveContainer" containerID="63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.987919 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.990215 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-acl-logging/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.990598 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-controller/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992293 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992316 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992323 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992332 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992361 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992369 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992377 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" exitCode=143 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992383 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" exitCode=143 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992400 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992449 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992458 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992456 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992581 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992595 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992608 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992614 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992620 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992625 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992632 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992638 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992644 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992651 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992657 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992668 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992677 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992684 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992691 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992697 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992704 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992711 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992718 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992725 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992732 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992739 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992750 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992762 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992769 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992776 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992783 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992789 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992795 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992801 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992810 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992817 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992824 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992833 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992843 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992852 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 
21:54:38.992858 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992864 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992872 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992880 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992888 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992896 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992902 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992908 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.027427 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.045598 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046004 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046085 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046102 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046141 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046163 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046210 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046374 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046669 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046750 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.047948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048064 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048106 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048123 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048140 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048230 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048249 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048178 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048195 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048326 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048430 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048354 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket" (OuterVolumeSpecName: "log-socket") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048392 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048550 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048815 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048454 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048881 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048899 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048976 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash" (OuterVolumeSpecName: "host-slash") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048916 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log" (OuterVolumeSpecName: "node-log") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049259 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-netd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-env-overrides\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049343 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-netd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-log-socket\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049467 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2f440e9-633b-41c3-ba83-2f6195004621-ovn-node-metrics-cert\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049494 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-kubelet\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049508 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049537 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-log-socket\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-kubelet\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049791 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-node-log\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049876 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-node-log\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049939 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-script-lib\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049965 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-var-lib-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049996 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-env-overrides\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050059 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-etc-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050066 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050083 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-netns\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050109 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-var-lib-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050121 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-bin\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050137 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-netns\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050149 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-slash\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050161 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-etc-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050184 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-bin\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050215 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-config\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050240 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-systemd-units\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677b9\" (UniqueName: \"kubernetes.io/projected/d2f440e9-633b-41c3-ba83-2f6195004621-kube-api-access-677b9\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-ovn\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050349 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-systemd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050498 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-script-lib\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050640 4979 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050664 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-systemd-units\") pod \"ovnkube-node-r6k8t\" (UID: 
\"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050681 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-systemd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050700 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-ovn\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050717 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-slash\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050732 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050748 4979 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050761 4979 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050774 4979 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050787 4979 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050795 4979 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050804 4979 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050814 4979 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050823 4979 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050832 4979 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050844 4979 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050857 4979 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050869 4979 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050881 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050890 4979 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050900 4979 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-config\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.053265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r" (OuterVolumeSpecName: "kube-api-access-5gg6r") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "kube-api-access-5gg6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.053938 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2f440e9-633b-41c3-ba83-2f6195004621-ovn-node-metrics-cert\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.061289 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.066378 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.068152 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.070682 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677b9\" (UniqueName: \"kubernetes.io/projected/d2f440e9-633b-41c3-ba83-2f6195004621-kube-api-access-677b9\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.087835 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.102093 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.115631 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.150386 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.154756 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.154783 4979 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.154798 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.166746 4979 scope.go:117] "RemoveContainer" 
containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.179549 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.195204 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.208289 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.208766 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.208806 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.208835 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.209147 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209169 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209185 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.209399 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" 
Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209424 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209440 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.209666 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209687 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209702 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210027 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210061 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210074 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210291 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 
9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210312 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210323 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210485 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210509 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210523 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210697 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210723 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210738 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210936 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210956 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210969 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.211154 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211175 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211186 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211357 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211373 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211563 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container 
\"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211587 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211769 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211786 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211948 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211971 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212177 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212200 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212440 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212466 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212668 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212688 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212839 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212859 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213013 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213048 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213213 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213232 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213444 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 
924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213474 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214049 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214105 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214410 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214432 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214622 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214642 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214866 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214883 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215083 4979 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215099 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215268 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215285 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215438 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215463 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215628 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215645 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215841 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 
30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215859 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216008 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216025 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216264 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216284 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216525 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216552 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216746 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216767 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216923 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status 
\"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216943 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217157 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217173 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217333 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217351 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217515 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217533 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217699 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217717 4979 scope.go:117] "RemoveContainer" 
containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217929 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.257556 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: W0130 21:54:39.277110 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f440e9_633b_41c3_ba83_2f6195004621.slice/crio-68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec WatchSource:0}: Error finding container 68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec: Status 404 returned error can't find the container with id 68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.359839 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.368591 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.001364 4979 generic.go:334] "Generic (PLEG): container finished" podID="d2f440e9-633b-41c3-ba83-2f6195004621" containerID="9680da53a0072a0af82bfce315277c7afb6976e51132839d2841b4f0e32443f0" exitCode=0 Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.001469 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerDied","Data":"9680da53a0072a0af82bfce315277c7afb6976e51132839d2841b4f0e32443f0"} Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.001885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec"} Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.005879 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/2.log" Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.005975 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"a648a1eb896eede38c93068819f0a43dcb99f6f9b3238b3b3b8e7809fbcad058"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015182 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"af08390c250f77eff24182962a6e359d6b0cb16fa868bb928a683b7b8323ecef"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015694 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"6c07ebecd69fc763f832be746633a10addbf9766e08f97166d597636e225949f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"dc054c51d90538de6f6427e3d1e3cbd73304ef4d999826fefd09881596a2d10f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015717 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"008ea364b9b5969ccf3c5ae14e9ee0742d62d5b5e4778de92aa0ac20ce39927f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"8537d3457c3cf1fa642e4dfdba5ffdd2845d49470adecfe3dfa4e5916f41025f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015737 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"bd873d06f285769c04a9f10e3b6bce4034ae849d3c9d8be50b28bc09db86cbbd"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.076895 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" path="/var/lib/kubelet/pods/34ce4851-1ecc-47da-89ca-09894eb0908a/volumes" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.343168 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.345346 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.347744 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.349052 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.349129 4979 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-jpprx" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.349384 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.419105 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.419190 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.419219 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.520501 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.520574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.520677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.521023 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.521831 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.547856 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.664328 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707148 4979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707273 4979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707314 4979 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707406 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-sr9vn" podUID="55b164f6-7e71-4403-9598-6673cea6876e" Jan 30 21:54:44 crc kubenswrapper[4979]: I0130 21:54:44.041251 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"a28a10e271e9ed978b0682bd3665e6f3d83f376ede5a973f88697477ecbd6431"} Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.066504 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"ce791bd174715dd1afd21de50ab981e47a723de17d179408b2e0b5298441e592"} Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.069242 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.069825 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.114237 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.122381 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" podStartSLOduration=8.122357196 podStartE2EDuration="8.122357196s" podCreationTimestamp="2026-01-30 21:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:54:46.116392058 +0000 UTC m=+882.077639121" watchObservedRunningTime="2026-01-30 21:54:46.122357196 +0000 UTC m=+882.083604229" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.875862 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.876000 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.876557 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.921699 4979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.934160 4979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
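Every sandbox failure above bottoms out in the same condition: the runtime found no network definition under /etc/kubernetes/cni/net.d/ because the network provider (ovnkube-node, still coming up in the entries further up) had not yet written its config. A sketch of that kind of existence check, assuming only the directory path quoted in the error; the real lookup happens inside the container runtime's CNI layer, not in code like this.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// definition (.conf, .conflist, or .json) -- the condition whose absence
// produces "no CNI configuration file in /etc/kubernetes/cni/net.d/".
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println("CNI config present:", ok, "err:", err)
}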
pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.934223 4979 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.934293 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-sr9vn" podUID="55b164f6-7e71-4403-9598-6673cea6876e" Jan 30 21:54:47 crc kubenswrapper[4979]: I0130 21:54:47.079177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:47 crc kubenswrapper[4979]: I0130 21:54:47.109832 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.417730 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.420296 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.440357 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.507665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.508060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.508206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.609815 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610325 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610559 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610879 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.635655 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.744907 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:55 crc kubenswrapper[4979]: I0130 21:54:55.034335 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:54:55 crc kubenswrapper[4979]: W0130 21:54:55.054492 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c0d611_5c8f_4ae6_93d4_d5029516ea1e.slice/crio-ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df WatchSource:0}: Error finding container ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df: Status 404 returned error can't find the container with id ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df Jan 30 21:54:55 crc kubenswrapper[4979]: I0130 21:54:55.129295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerStarted","Data":"ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df"} Jan 30 21:54:56 crc kubenswrapper[4979]: I0130 21:54:56.153967 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf"} Jan 30 21:54:56 crc kubenswrapper[4979]: I0130 21:54:56.153970 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" exitCode=0 Jan 30 21:54:56 crc kubenswrapper[4979]: I0130 21:54:56.157240 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:54:57 crc kubenswrapper[4979]: I0130 21:54:57.162667 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" exitCode=0 Jan 30 21:54:57 crc kubenswrapper[4979]: I0130 21:54:57.163170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8"} Jan 30 21:54:58 crc kubenswrapper[4979]: I0130 21:54:58.171821 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerStarted","Data":"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752"} Jan 30 21:54:58 crc kubenswrapper[4979]: I0130 21:54:58.203026 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stz2f" podStartSLOduration=2.768167596 podStartE2EDuration="4.202998294s" podCreationTimestamp="2026-01-30 21:54:54 +0000 UTC" firstStartedPulling="2026-01-30 21:54:56.156946948 +0000 UTC 
m=+892.118193981" lastFinishedPulling="2026-01-30 21:54:57.591777636 +0000 UTC m=+893.553024679" observedRunningTime="2026-01-30 21:54:58.201067013 +0000 UTC m=+894.162314056" watchObservedRunningTime="2026-01-30 21:54:58.202998294 +0000 UTC m=+894.164245357" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.039632 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.040255 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.069873 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.070845 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.546828 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 21:55:03 crc kubenswrapper[4979]: I0130 21:55:03.221408 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sr9vn" event={"ID":"55b164f6-7e71-4403-9598-6673cea6876e","Type":"ContainerStarted","Data":"ccc67f80dbda21ecf36ae40de3aab4b305feec6ba1350334879156336efd5488"} Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.228765 4979 generic.go:334] "Generic (PLEG): container finished" podID="55b164f6-7e71-4403-9598-6673cea6876e" containerID="f69e5e60ca65ac037198a7875cb73ae5dd60bb9ab12c82aead51159afd7e44ab" exitCode=0 Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.229255 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sr9vn" event={"ID":"55b164f6-7e71-4403-9598-6673cea6876e","Type":"ContainerDied","Data":"f69e5e60ca65ac037198a7875cb73ae5dd60bb9ab12c82aead51159afd7e44ab"} Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.745515 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.745592 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.793201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.298866 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.356203 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.577410 4979 util.go:48] "No ready sandbox for pod can be found. 
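The pod_startup_latency_tracker record above is plain timestamp arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch reproducing the certified-operators-stz2f figures from the logged timestamps; the SLO result agrees to within a few nanoseconds, the residue of the kubelet's monotonic clock readings (the m=+... suffixes).

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's time.String() layout, which these log values use.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30 21:54:54 +0000 UTC")
	firstPull := parse("2026-01-30 21:54:56.156946948 +0000 UTC")
	lastPull := parse("2026-01-30 21:54:57.591777636 +0000 UTC")
	running := parse("2026-01-30 21:54:58.202998294 +0000 UTC")

	e2e := running.Sub(created)          // 4.202998294s, as logged
	slo := e2e - lastPull.Sub(firstPull) // ~2.768167596s, as logged
	fmt.Println("E2E:", e2e, "SLO:", slo)
}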
Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.682503 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"55b164f6-7e71-4403-9598-6673cea6876e\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.682616 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"55b164f6-7e71-4403-9598-6673cea6876e\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.682732 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"55b164f6-7e71-4403-9598-6673cea6876e\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.683244 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "55b164f6-7e71-4403-9598-6673cea6876e" (UID: "55b164f6-7e71-4403-9598-6673cea6876e"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.691615 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4" (OuterVolumeSpecName: "kube-api-access-w8kd4") pod "55b164f6-7e71-4403-9598-6673cea6876e" (UID: "55b164f6-7e71-4403-9598-6673cea6876e"). InnerVolumeSpecName "kube-api-access-w8kd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.705011 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "55b164f6-7e71-4403-9598-6673cea6876e" (UID: "55b164f6-7e71-4403-9598-6673cea6876e"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.784746 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.784807 4979 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.784826 4979 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:06 crc kubenswrapper[4979]: I0130 21:55:06.248768 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:06 crc kubenswrapper[4979]: I0130 21:55:06.248789 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sr9vn" event={"ID":"55b164f6-7e71-4403-9598-6673cea6876e","Type":"ContainerDied","Data":"ccc67f80dbda21ecf36ae40de3aab4b305feec6ba1350334879156336efd5488"} Jan 30 21:55:06 crc kubenswrapper[4979]: I0130 21:55:06.248888 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc67f80dbda21ecf36ae40de3aab4b305feec6ba1350334879156336efd5488" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.255409 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stz2f" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" containerID="cri-o://62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" gracePeriod=2 Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.706909 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.819253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.819316 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.819390 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.821444 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities" (OuterVolumeSpecName: "utilities") pod "93c0d611-5c8f-4ae6-93d4-d5029516ea1e" (UID: "93c0d611-5c8f-4ae6-93d4-d5029516ea1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.829276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv" (OuterVolumeSpecName: "kube-api-access-4q9sv") pod "93c0d611-5c8f-4ae6-93d4-d5029516ea1e" (UID: "93c0d611-5c8f-4ae6-93d4-d5029516ea1e"). InnerVolumeSpecName "kube-api-access-4q9sv". 
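The "Killing container with a grace period" entry above (gracePeriod=2) is the standard two-step stop: deliver SIGTERM, wait out the grace period, then escalate to SIGKILL. A stdlib sketch of the pattern against a local process; the kubelet itself performs this through the CRI StopContainer call rather than signalling directly.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace for exit, then kills.
// Unix-specific (SIGTERM).
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		fmt.Println("grace period expired; killing")
		return cmd.Process.Kill()
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd, 2*time.Second)
}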
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.921591 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.921671 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265201 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" exitCode=0 Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752"} Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265344 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265382 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df"} Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265454 4979 scope.go:117] "RemoveContainer" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.293998 4979 scope.go:117] "RemoveContainer" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.322438 4979 scope.go:117] "RemoveContainer" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.346838 4979 scope.go:117] "RemoveContainer" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" Jan 30 21:55:08 crc kubenswrapper[4979]: E0130 21:55:08.347730 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752\": container with ID starting with 62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752 not found: ID does not exist" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.347823 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752"} err="failed to get container status \"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752\": rpc error: code = NotFound desc = could not find container \"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752\": container with ID starting with 62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752 not found: ID does not exist" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.347868 4979 scope.go:117] 
"RemoveContainer" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" Jan 30 21:55:08 crc kubenswrapper[4979]: E0130 21:55:08.348713 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8\": container with ID starting with 65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8 not found: ID does not exist" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.348776 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8"} err="failed to get container status \"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8\": rpc error: code = NotFound desc = could not find container \"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8\": container with ID starting with 65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8 not found: ID does not exist" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.348816 4979 scope.go:117] "RemoveContainer" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" Jan 30 21:55:08 crc kubenswrapper[4979]: E0130 21:55:08.349729 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf\": container with ID starting with 0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf not found: ID does not exist" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.349762 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf"} err="failed to get container status \"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf\": rpc error: code = NotFound desc = could not find container \"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf\": container with ID starting with 0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf not found: ID does not exist" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.859655 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c0d611-5c8f-4ae6-93d4-d5029516ea1e" (UID: "93c0d611-5c8f-4ae6-93d4-d5029516ea1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.927504 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.938946 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.940181 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:09 crc kubenswrapper[4979]: I0130 21:55:09.082156 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" path="/var/lib/kubelet/pods/93c0d611-5c8f-4ae6-93d4-d5029516ea1e/volumes" Jan 30 21:55:09 crc kubenswrapper[4979]: I0130 21:55:09.282110 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.428770 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9"] Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429445 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b164f6-7e71-4403-9598-6673cea6876e" containerName="storage" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429493 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b164f6-7e71-4403-9598-6673cea6876e" containerName="storage" Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429509 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-content" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429521 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-content" Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429536 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-utilities" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429545 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-utilities" Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429559 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429569 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429700 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429713 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b164f6-7e71-4403-9598-6673cea6876e" containerName="storage" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.430930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.434566 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.439919 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9"] Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.594796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.594898 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.595359 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.697838 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.698085 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.698162 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.699636 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.699718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.723952 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.752417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:13 crc kubenswrapper[4979]: I0130 21:55:13.051831 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9"] Jan 30 21:55:13 crc kubenswrapper[4979]: I0130 21:55:13.305700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerStarted","Data":"91bb921e4350bacb30f3ab5fa2b4c1c8cc38f05ec6f493986bfecc12204a0dfd"} Jan 30 21:55:13 crc kubenswrapper[4979]: I0130 21:55:13.306223 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerStarted","Data":"d0fb9a08ccc09bee63c7ea1c38b7828a31ec56f46b1f756a4c20c8dafcd8507b"} Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.315952 4979 generic.go:334] "Generic (PLEG): container finished" podID="24460103-3748-49b9-9231-5a6e63ede52c" containerID="91bb921e4350bacb30f3ab5fa2b4c1c8cc38f05ec6f493986bfecc12204a0dfd" exitCode=0 Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.316019 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"91bb921e4350bacb30f3ab5fa2b4c1c8cc38f05ec6f493986bfecc12204a0dfd"} Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.641597 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.643356 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.666004 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.752883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.753389 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.753474 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.854416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.854472 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.854533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.855287 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.855515 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.890373 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.962685 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:15 crc kubenswrapper[4979]: I0130 21:55:15.203912 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:15 crc kubenswrapper[4979]: W0130 21:55:15.211118 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod673080e1_83e2_49f1_9c9a_713fb9367bea.slice/crio-eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23 WatchSource:0}: Error finding container eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23: Status 404 returned error can't find the container with id eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23 Jan 30 21:55:15 crc kubenswrapper[4979]: I0130 21:55:15.321984 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerStarted","Data":"eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23"} Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.330423 4979 generic.go:334] "Generic (PLEG): container finished" podID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988" exitCode=0 Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.330545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988"} Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.333288 4979 generic.go:334] "Generic (PLEG): container finished" podID="24460103-3748-49b9-9231-5a6e63ede52c" containerID="a82e023663383677302026ce5a7796bcf301b4b1d7880563e3e891cec23be5d4" exitCode=0 Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.333385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"a82e023663383677302026ce5a7796bcf301b4b1d7880563e3e891cec23be5d4"} Jan 30 21:55:17 crc kubenswrapper[4979]: I0130 21:55:17.345542 4979 generic.go:334] "Generic (PLEG): container finished" podID="24460103-3748-49b9-9231-5a6e63ede52c" containerID="4abd4f323f30af2af9b11b46065d16ea9b02941c97bba6155cee77d904dac6f1" exitCode=0 Jan 30 21:55:17 crc kubenswrapper[4979]: I0130 21:55:17.345632 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"4abd4f323f30af2af9b11b46065d16ea9b02941c97bba6155cee77d904dac6f1"} Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.354485 4979 generic.go:334] "Generic (PLEG): container finished" podID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" exitCode=0 Jan 30 21:55:18 
crc kubenswrapper[4979]: I0130 21:55:18.354595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60"} Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.602757 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.713842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"24460103-3748-49b9-9231-5a6e63ede52c\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.713988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"24460103-3748-49b9-9231-5a6e63ede52c\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.714094 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"24460103-3748-49b9-9231-5a6e63ede52c\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.717070 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle" (OuterVolumeSpecName: "bundle") pod "24460103-3748-49b9-9231-5a6e63ede52c" (UID: "24460103-3748-49b9-9231-5a6e63ede52c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.722727 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht" (OuterVolumeSpecName: "kube-api-access-ss7ht") pod "24460103-3748-49b9-9231-5a6e63ede52c" (UID: "24460103-3748-49b9-9231-5a6e63ede52c"). InnerVolumeSpecName "kube-api-access-ss7ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.797366 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util" (OuterVolumeSpecName: "util") pod "24460103-3748-49b9-9231-5a6e63ede52c" (UID: "24460103-3748-49b9-9231-5a6e63ede52c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.815374 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.815416 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.815426 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.362939 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"d0fb9a08ccc09bee63c7ea1c38b7828a31ec56f46b1f756a4c20c8dafcd8507b"} Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.362996 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fb9a08ccc09bee63c7ea1c38b7828a31ec56f46b1f756a4c20c8dafcd8507b" Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.363052 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.365428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerStarted","Data":"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742"} Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.387913 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fhr5r" podStartSLOduration=2.898179854 podStartE2EDuration="5.38788914s" podCreationTimestamp="2026-01-30 21:55:14 +0000 UTC" firstStartedPulling="2026-01-30 21:55:16.334867083 +0000 UTC m=+912.296114136" lastFinishedPulling="2026-01-30 21:55:18.824576389 +0000 UTC m=+914.785823422" observedRunningTime="2026-01-30 21:55:19.385623429 +0000 UTC m=+915.346870462" watchObservedRunningTime="2026-01-30 21:55:19.38788914 +0000 UTC m=+915.349136173" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.838312 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tv5t2"] Jan 30 21:55:23 crc kubenswrapper[4979]: E0130 21:55:23.839255 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="extract" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839283 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="extract" Jan 30 21:55:23 crc kubenswrapper[4979]: E0130 21:55:23.839308 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="util" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839320 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="util" Jan 30 21:55:23 crc 
kubenswrapper[4979]: E0130 21:55:23.839356 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="pull" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839370 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="pull" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839530 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="extract" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.840302 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.844521 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-57qqj" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.845396 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.847479 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.851772 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tv5t2"] Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.988159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxcp\" (UniqueName: \"kubernetes.io/projected/949791a2-d4bd-4ec8-8e34-70a2d0af1af1-kube-api-access-ggxcp\") pod \"nmstate-operator-646758c888-tv5t2\" (UID: \"949791a2-d4bd-4ec8-8e34-70a2d0af1af1\") " pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.090269 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxcp\" (UniqueName: \"kubernetes.io/projected/949791a2-d4bd-4ec8-8e34-70a2d0af1af1-kube-api-access-ggxcp\") pod \"nmstate-operator-646758c888-tv5t2\" (UID: \"949791a2-d4bd-4ec8-8e34-70a2d0af1af1\") " pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.118629 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxcp\" (UniqueName: \"kubernetes.io/projected/949791a2-d4bd-4ec8-8e34-70a2d0af1af1-kube-api-access-ggxcp\") pod \"nmstate-operator-646758c888-tv5t2\" (UID: \"949791a2-d4bd-4ec8-8e34-70a2d0af1af1\") " pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.161960 4979 util.go:30] "No sandbox for pod can be found. 
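The cpu_manager/memory_manager RemoveStaleState entries above are checkpoint hygiene: resource assignments recorded for containers of pods that no longer exist (here, the finished bundle-extract pod's "pull", "util", and "extract" containers) are dropped before the new pod is admitted. A map-pruning sketch of the same shape; the types are illustrative, not kubelet's checkpoint format.

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops per-container assignments whose pod is no
// longer live, mirroring the RemoveStaleState entries above.
func removeStaleState(assignments map[key]string, livePods map[string]bool) {
	for k := range assignments {
		if !livePods[k.podUID] {
			fmt.Printf("removing stale assignment pod=%s container=%s\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key]string{
		{"24460103-3748-49b9-9231-5a6e63ede52c", "extract"}: "cpus 0-3",
	}
	removeStaleState(a, map[string]bool{}) // no live pods: everything goes
	fmt.Println(len(a), "assignments left")
}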
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.367252 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tv5t2"] Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.402613 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" event={"ID":"949791a2-d4bd-4ec8-8e34-70a2d0af1af1","Type":"ContainerStarted","Data":"9596bc9f0d3c52c86e25051a44114fef18caae47d8a09046b38a088865ba0fd1"} Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.963734 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.964325 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:26 crc kubenswrapper[4979]: I0130 21:55:26.007793 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhr5r" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" probeResult="failure" output=< Jan 30 21:55:26 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 21:55:26 crc kubenswrapper[4979]: > Jan 30 21:55:28 crc kubenswrapper[4979]: I0130 21:55:28.458172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" event={"ID":"949791a2-d4bd-4ec8-8e34-70a2d0af1af1","Type":"ContainerStarted","Data":"c6e8b443a5be98f70ec81521d22fa2448f8261d24e12dffec37095d2f1d194e7"} Jan 30 21:55:28 crc kubenswrapper[4979]: I0130 21:55:28.485736 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" podStartSLOduration=2.177794615 podStartE2EDuration="5.485697904s" podCreationTimestamp="2026-01-30 21:55:23 +0000 UTC" firstStartedPulling="2026-01-30 21:55:24.385900775 +0000 UTC m=+920.347147818" lastFinishedPulling="2026-01-30 21:55:27.693804034 +0000 UTC m=+923.655051107" observedRunningTime="2026-01-30 21:55:28.480233756 +0000 UTC m=+924.441480829" watchObservedRunningTime="2026-01-30 21:55:28.485697904 +0000 UTC m=+924.446944977" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.040344 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.040884 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.440289 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqwmx"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.441536 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.451794 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-56xgf" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.469719 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqwmx"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.526014 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.527026 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.529312 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.529415 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2xs54"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.529921 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.530266 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89btw\" (UniqueName: \"kubernetes.io/projected/f03646b0-8776-45cc-9594-a0266af57be5-kube-api-access-89btw\") pod \"nmstate-metrics-54757c584b-nqwmx\" (UID: \"f03646b0-8776-45cc-9594-a0266af57be5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.561902 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631731 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-dbus-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89btw\" (UniqueName: \"kubernetes.io/projected/f03646b0-8776-45cc-9594-a0266af57be5-kube-api-access-89btw\") pod \"nmstate-metrics-54757c584b-nqwmx\" (UID: \"f03646b0-8776-45cc-9594-a0266af57be5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631834 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9dm\" (UniqueName: \"kubernetes.io/projected/63bf7e31-b607-4b21-9753-eb05a7bfb987-kube-api-access-5n9dm\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631871 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt4j\" (UniqueName: \"kubernetes.io/projected/2bf07cc3-611c-44b3-9fd0-831f5b718f11-kube-api-access-zrt4j\") pod \"nmstate-handler-2xs54\" (UID: 
\"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63bf7e31-b607-4b21-9753-eb05a7bfb987-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631915 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-ovs-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-nmstate-lock\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.665118 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.666219 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.668317 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.669184 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.669345 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2pmmn" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.671586 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89btw\" (UniqueName: \"kubernetes.io/projected/f03646b0-8776-45cc-9594-a0266af57be5-kube-api-access-89btw\") pod \"nmstate-metrics-54757c584b-nqwmx\" (UID: \"f03646b0-8776-45cc-9594-a0266af57be5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.704860 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736608 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63bf7e31-b607-4b21-9753-eb05a7bfb987-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736684 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-ovs-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " 
pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736719 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-nmstate-lock\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736753 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-dbus-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbl6s\" (UniqueName: \"kubernetes.io/projected/4e67f5da-565e-4850-ac22-136965b5e12d-kube-api-access-xbl6s\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-ovs-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736829 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-nmstate-lock\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736842 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9dm\" (UniqueName: \"kubernetes.io/projected/63bf7e31-b607-4b21-9753-eb05a7bfb987-kube-api-access-5n9dm\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67f5da-565e-4850-ac22-136965b5e12d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.737039 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrt4j\" (UniqueName: \"kubernetes.io/projected/2bf07cc3-611c-44b3-9fd0-831f5b718f11-kube-api-access-zrt4j\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.737121 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e67f5da-565e-4850-ac22-136965b5e12d-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.737148 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-dbus-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.753971 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9dm\" (UniqueName: \"kubernetes.io/projected/63bf7e31-b607-4b21-9753-eb05a7bfb987-kube-api-access-5n9dm\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.755652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63bf7e31-b607-4b21-9753-eb05a7bfb987-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.764992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrt4j\" (UniqueName: \"kubernetes.io/projected/2bf07cc3-611c-44b3-9fd0-831f5b718f11-kube-api-access-zrt4j\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.765446 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.838148 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67f5da-565e-4850-ac22-136965b5e12d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.838240 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e67f5da-565e-4850-ac22-136965b5e12d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.838328 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbl6s\" (UniqueName: \"kubernetes.io/projected/4e67f5da-565e-4850-ac22-136965b5e12d-kube-api-access-xbl6s\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.840461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e67f5da-565e-4850-ac22-136965b5e12d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.848161 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.848709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67f5da-565e-4850-ac22-136965b5e12d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.857476 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbl6s\" (UniqueName: \"kubernetes.io/projected/4e67f5da-565e-4850-ac22-136965b5e12d-kube-api-access-xbl6s\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.866570 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.883129 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fcb4db5f4-754dt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.883968 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: W0130 21:55:32.897083 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bf07cc3_611c_44b3_9fd0_831f5b718f11.slice/crio-20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3 WatchSource:0}: Error finding container 20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3: Status 404 returned error can't find the container with id 20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3 Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.898872 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fcb4db5f4-754dt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.940245 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-service-ca\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.940891 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-trusted-ca-bundle\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941014 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-console-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941073 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-oauth-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941538 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmld\" (UniqueName: \"kubernetes.io/projected/7afff541-d8aa-462f-b084-a80ff0e2729a-kube-api-access-cfmld\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941608 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-oauth-config\") pod 
\"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.002582 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042503 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfmld\" (UniqueName: \"kubernetes.io/projected/7afff541-d8aa-462f-b084-a80ff0e2729a-kube-api-access-cfmld\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042577 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042610 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-oauth-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042636 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-service-ca\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.043844 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-service-ca\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-trusted-ca-bundle\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.043913 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-console-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.043947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-oauth-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.044599 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-oauth-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.045194 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-console-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.048824 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-trusted-ca-bundle\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.050865 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.052633 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-oauth-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.064737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfmld\" (UniqueName: \"kubernetes.io/projected/7afff541-d8aa-462f-b084-a80ff0e2729a-kube-api-access-cfmld\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.102316 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqwmx"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.113010 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03646b0_8776_45cc_9594_a0266af57be5.slice/crio-8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44 WatchSource:0}: Error finding container 8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44: Status 404 returned error can't find the container with id 8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.160817 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.163828 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63bf7e31_b607_4b21_9753_eb05a7bfb987.slice/crio-797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98 WatchSource:0}: Error finding container 
797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98: Status 404 returned error can't find the container with id 797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.209761 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.246596 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.261540 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e67f5da_565e_4850_ac22_136965b5e12d.slice/crio-98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5 WatchSource:0}: Error finding container 98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5: Status 404 returned error can't find the container with id 98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.422329 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fcb4db5f4-754dt"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.430189 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7afff541_d8aa_462f_b084_a80ff0e2729a.slice/crio-a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67 WatchSource:0}: Error finding container a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67: Status 404 returned error can't find the container with id a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.509213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fcb4db5f4-754dt" event={"ID":"7afff541-d8aa-462f-b084-a80ff0e2729a","Type":"ContainerStarted","Data":"a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.510524 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" event={"ID":"63bf7e31-b607-4b21-9753-eb05a7bfb987","Type":"ContainerStarted","Data":"797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.511581 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" event={"ID":"f03646b0-8776-45cc-9594-a0266af57be5","Type":"ContainerStarted","Data":"8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.513054 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xs54" event={"ID":"2bf07cc3-611c-44b3-9fd0-831f5b718f11","Type":"ContainerStarted","Data":"20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.514206 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" event={"ID":"4e67f5da-565e-4850-ac22-136965b5e12d","Type":"ContainerStarted","Data":"98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5"} Jan 30 21:55:34 crc kubenswrapper[4979]: I0130 21:55:34.523496 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-7fcb4db5f4-754dt" event={"ID":"7afff541-d8aa-462f-b084-a80ff0e2729a","Type":"ContainerStarted","Data":"8b8cd7510018dc6cee8c7141f201dc88ce9b9c6adabeb22011cdcf928c6a0a0d"} Jan 30 21:55:34 crc kubenswrapper[4979]: I0130 21:55:34.543531 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fcb4db5f4-754dt" podStartSLOduration=2.543504294 podStartE2EDuration="2.543504294s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:55:34.54114563 +0000 UTC m=+930.502392683" watchObservedRunningTime="2026-01-30 21:55:34.543504294 +0000 UTC m=+930.504751327" Jan 30 21:55:35 crc kubenswrapper[4979]: I0130 21:55:35.007538 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:35 crc kubenswrapper[4979]: I0130 21:55:35.050178 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:35 crc kubenswrapper[4979]: I0130 21:55:35.245513 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.551150 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" event={"ID":"f03646b0-8776-45cc-9594-a0266af57be5","Type":"ContainerStarted","Data":"1be464082ec264ba4485332f2f612c00b20c867850f4ace43f8c1b286b7d62b0"} Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.554631 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.557324 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" event={"ID":"4e67f5da-565e-4850-ac22-136965b5e12d","Type":"ContainerStarted","Data":"2b82c4f4be2eb38ccaa379a4e0ec585c8e20bae3ce80beade4ed93f9c0d714a7"} Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.560062 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fhr5r" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" containerID="cri-o://3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" gracePeriod=2 Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.560686 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" event={"ID":"63bf7e31-b607-4b21-9753-eb05a7bfb987","Type":"ContainerStarted","Data":"96a2fb0ce51d7d5ca9c15bc7ec31b57f4881e3e86dc1a6042725b8dc07c14654"} Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.565672 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.580715 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2xs54" podStartSLOduration=1.282589291 podStartE2EDuration="4.580687225s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:32.906075091 +0000 UTC m=+928.867322124" lastFinishedPulling="2026-01-30 21:55:36.204172985 +0000 UTC m=+932.165420058" observedRunningTime="2026-01-30 21:55:36.578020593 
+0000 UTC m=+932.539267626" watchObservedRunningTime="2026-01-30 21:55:36.580687225 +0000 UTC m=+932.541934258" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.599303 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" podStartSLOduration=1.667536389 podStartE2EDuration="4.599277367s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:33.264675227 +0000 UTC m=+929.225922260" lastFinishedPulling="2026-01-30 21:55:36.196416205 +0000 UTC m=+932.157663238" observedRunningTime="2026-01-30 21:55:36.597805278 +0000 UTC m=+932.559052311" watchObservedRunningTime="2026-01-30 21:55:36.599277367 +0000 UTC m=+932.560524400" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.672552 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" podStartSLOduration=1.634266437 podStartE2EDuration="4.672534265s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:33.165999349 +0000 UTC m=+929.127246382" lastFinishedPulling="2026-01-30 21:55:36.204267167 +0000 UTC m=+932.165514210" observedRunningTime="2026-01-30 21:55:36.669632107 +0000 UTC m=+932.630879140" watchObservedRunningTime="2026-01-30 21:55:36.672534265 +0000 UTC m=+932.633781298" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.892149 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.904263 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"673080e1-83e2-49f1-9c9a-713fb9367bea\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.904342 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"673080e1-83e2-49f1-9c9a-713fb9367bea\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.904374 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"673080e1-83e2-49f1-9c9a-713fb9367bea\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.905449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities" (OuterVolumeSpecName: "utilities") pod "673080e1-83e2-49f1-9c9a-713fb9367bea" (UID: "673080e1-83e2-49f1-9c9a-713fb9367bea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.911250 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn" (OuterVolumeSpecName: "kube-api-access-nqdjn") pod "673080e1-83e2-49f1-9c9a-713fb9367bea" (UID: "673080e1-83e2-49f1-9c9a-713fb9367bea"). InnerVolumeSpecName "kube-api-access-nqdjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.006089 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.006122 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.023778 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "673080e1-83e2-49f1-9c9a-713fb9367bea" (UID: "673080e1-83e2-49f1-9c9a-713fb9367bea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.107226 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.572879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xs54" event={"ID":"2bf07cc3-611c-44b3-9fd0-831f5b718f11","Type":"ContainerStarted","Data":"e0900ace803ae06e5ce574c7da1537cc845a405ccd01943677907a19b83308de"} Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577005 4979 generic.go:334] "Generic (PLEG): container finished" podID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" exitCode=0 Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577167 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577095 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742"} Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23"} Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577407 4979 scope.go:117] "RemoveContainer" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.604646 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.613493 4979 scope.go:117] "RemoveContainer" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.619332 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.646621 4979 scope.go:117] "RemoveContainer" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.667109 4979 scope.go:117] "RemoveContainer" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" Jan 30 21:55:37 crc kubenswrapper[4979]: E0130 21:55:37.667677 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742\": container with ID starting with 3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742 not found: ID does not exist" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.667751 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742"} err="failed to get container status \"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742\": rpc error: code = NotFound desc = could not find container \"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742\": container with ID starting with 3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742 not found: ID does not exist" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.667805 4979 scope.go:117] "RemoveContainer" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" Jan 30 21:55:37 crc kubenswrapper[4979]: E0130 21:55:37.668350 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60\": container with ID starting with 3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60 not found: ID does not exist" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.668412 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60"} err="failed to get container status \"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60\": rpc error: code = NotFound desc = could not find container \"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60\": container with ID starting with 3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60 not found: ID does not exist" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.668478 4979 scope.go:117] "RemoveContainer" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988" Jan 30 21:55:37 crc kubenswrapper[4979]: E0130 21:55:37.668891 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988\": container with ID starting with c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988 not found: ID does not exist" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.668939 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988"} err="failed to get container status \"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988\": rpc error: code = NotFound desc = could not find container \"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988\": container with ID starting with c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988 not found: ID does not exist" Jan 30 21:55:39 crc kubenswrapper[4979]: I0130 21:55:39.080272 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" path="/var/lib/kubelet/pods/673080e1-83e2-49f1-9c9a-713fb9367bea/volumes" Jan 30 21:55:39 crc kubenswrapper[4979]: I0130 21:55:39.598821 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" event={"ID":"f03646b0-8776-45cc-9594-a0266af57be5","Type":"ContainerStarted","Data":"eb4cb0a8dff540b8fffe7f7f9ab6fc9f60dd806a7846372c1196b89789e74b15"} Jan 30 21:55:39 crc kubenswrapper[4979]: I0130 21:55:39.628872 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" podStartSLOduration=2.0151886 podStartE2EDuration="7.62884099s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:33.116410949 +0000 UTC m=+929.077657982" lastFinishedPulling="2026-01-30 21:55:38.730063349 +0000 UTC m=+934.691310372" observedRunningTime="2026-01-30 21:55:39.622677583 +0000 UTC m=+935.583924646" watchObservedRunningTime="2026-01-30 21:55:39.62884099 +0000 UTC m=+935.590088033" Jan 30 21:55:42 crc kubenswrapper[4979]: I0130 21:55:42.905122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.210066 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.210340 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:43 crc 
kubenswrapper[4979]: I0130 21:55:43.217298 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.635025 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.715598 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:55:52 crc kubenswrapper[4979]: I0130 21:55:52.859196 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.413715 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:55:53 crc kubenswrapper[4979]: E0130 21:55:53.414422 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414438 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" Jan 30 21:55:53 crc kubenswrapper[4979]: E0130 21:55:53.414449 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-content" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414457 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-content" Jan 30 21:55:53 crc kubenswrapper[4979]: E0130 21:55:53.414468 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-utilities" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414480 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-utilities" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414606 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.415569 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.442213 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.583758 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.583877 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.584180 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.685622 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.685714 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.685807 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.686420 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.686529 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.733880 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.740718 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.274605 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.723266 4979 generic.go:334] "Generic (PLEG): container finished" podID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53" exitCode=0 Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.723331 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53"} Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.723370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerStarted","Data":"89992e57bfeaf566ee1898ace499fb3c14b6f08c56f4e7414987da47cef73f72"} Jan 30 21:55:56 crc kubenswrapper[4979]: I0130 21:55:56.745951 4979 generic.go:334] "Generic (PLEG): container finished" podID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1" exitCode=0 Jan 30 21:55:56 crc kubenswrapper[4979]: I0130 21:55:56.746068 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1"} Jan 30 21:55:57 crc kubenswrapper[4979]: I0130 21:55:57.760105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerStarted","Data":"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855"} Jan 30 21:55:57 crc kubenswrapper[4979]: I0130 21:55:57.784431 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2ltp" podStartSLOduration=2.369773416 podStartE2EDuration="4.784404863s" podCreationTimestamp="2026-01-30 21:55:53 +0000 UTC" firstStartedPulling="2026-01-30 21:55:54.726133734 +0000 UTC m=+950.687380767" lastFinishedPulling="2026-01-30 21:55:57.140765161 +0000 UTC m=+953.102012214" observedRunningTime="2026-01-30 21:55:57.782376668 +0000 UTC m=+953.743623701" watchObservedRunningTime="2026-01-30 21:55:57.784404863 +0000 UTC m=+953.745651886" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.039889 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.040979 4979 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.041077 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.041988 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.042072 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28" gracePeriod=600 Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.801995 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28" exitCode=0 Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.802065 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28"} Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.802970 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"} Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.803002 4979 scope.go:117] "RemoveContainer" containerID="bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.742502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.743572 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.815976 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.874198 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:04 crc kubenswrapper[4979]: I0130 21:56:04.062633 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:56:05 crc kubenswrapper[4979]: I0130 21:56:05.837980 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-s2ltp" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" containerID="cri-o://f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" gracePeriod=2 Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.267869 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.417193 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"c204f004-5a44-4602-9a51-b1364cd9e46f\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.417349 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"c204f004-5a44-4602-9a51-b1364cd9e46f\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.417471 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"c204f004-5a44-4602-9a51-b1364cd9e46f\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.419260 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities" (OuterVolumeSpecName: "utilities") pod "c204f004-5a44-4602-9a51-b1364cd9e46f" (UID: "c204f004-5a44-4602-9a51-b1364cd9e46f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.425571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5" (OuterVolumeSpecName: "kube-api-access-kfnm5") pod "c204f004-5a44-4602-9a51-b1364cd9e46f" (UID: "c204f004-5a44-4602-9a51-b1364cd9e46f"). InnerVolumeSpecName "kube-api-access-kfnm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.484835 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c204f004-5a44-4602-9a51-b1364cd9e46f" (UID: "c204f004-5a44-4602-9a51-b1364cd9e46f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.519019 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.519355 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.519524 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.847934 4979 generic.go:334] "Generic (PLEG): container finished" podID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" exitCode=0 Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.848563 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.848499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855"} Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.850656 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"89992e57bfeaf566ee1898ace499fb3c14b6f08c56f4e7414987da47cef73f72"} Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.850846 4979 scope.go:117] "RemoveContainer" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.895260 4979 scope.go:117] "RemoveContainer" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.904635 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.908845 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.925294 4979 scope.go:117] "RemoveContainer" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.949299 4979 scope.go:117] "RemoveContainer" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" Jan 30 21:56:06 crc kubenswrapper[4979]: E0130 21:56:06.950098 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855\": container with ID starting with f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855 not found: ID does not exist" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.950231 
Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.950328 4979 scope.go:117] "RemoveContainer" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1"
Jan 30 21:56:06 crc kubenswrapper[4979]: E0130 21:56:06.950919 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1\": container with ID starting with b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1 not found: ID does not exist" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1"
Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.951004 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1"} err="failed to get container status \"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1\": rpc error: code = NotFound desc = could not find container \"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1\": container with ID starting with b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1 not found: ID does not exist"
Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.951088 4979 scope.go:117] "RemoveContainer" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53"
Jan 30 21:56:06 crc kubenswrapper[4979]: E0130 21:56:06.951538 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53\": container with ID starting with fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53 not found: ID does not exist" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53"
Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.951618 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53"} err="failed to get container status \"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53\": rpc error: code = NotFound desc = could not find container \"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53\": container with ID starting with fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53 not found: ID does not exist"
Jan 30 21:56:07 crc kubenswrapper[4979]: I0130 21:56:07.083895 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" path="/var/lib/kubelet/pods/c204f004-5a44-4602-9a51-b1364cd9e46f/volumes"
Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.117152 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"]
Jan 30 21:56:08 crc kubenswrapper[4979]: E0130 21:56:08.118920 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-content"
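
The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are the kubelet retrying deletion of containers CRI-O has already pruned; the gRPC NotFound errors are benign because deletion is treated as idempotent. The same pattern against any gRPC API, as a sketch (removeIfPresent is a hypothetical helper, not kubelet code):

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats a gRPC NotFound from the runtime as success:
// the container being gone is exactly the state the caller wanted.
func removeIfPresent(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	notFound := status.Error(codes.NotFound, "could not find container")
	fmt.Println(removeIfPresent(func(string) error { return notFound }, "f07927"))          // nil
	fmt.Println(removeIfPresent(func(string) error { return errors.New("boom") }, "f07927")) // boom
}
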
"RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-content" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.118986 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-content" Jan 30 21:56:08 crc kubenswrapper[4979]: E0130 21:56:08.119059 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-utilities" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.119125 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-utilities" Jan 30 21:56:08 crc kubenswrapper[4979]: E0130 21:56:08.119180 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.119257 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.119407 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.120298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.123625 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.135692 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"] Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.172214 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.172293 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.172595 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274321 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnf25\" 
(UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.275074 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.299422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.435349 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.667566 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"] Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.792324 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-h6sv5" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" containerID="cri-o://37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" gracePeriod=15 Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.862189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerStarted","Data":"fa202ddbc836e952b85b3227e8f9d2ef0dfbc5d3f331b0b87d6066b738c774c0"} Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.886826 4979 patch_prober.go:28] interesting pod/console-f9d7485db-h6sv5 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.887734 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-h6sv5" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.147850 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h6sv5_cc25d794-4ead-4436-a026-179f655c13d4/console/0.log" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.147932 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.286899 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287012 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287059 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287119 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca" (OuterVolumeSpecName: "service-ca") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287994 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config" (OuterVolumeSpecName: "console-config") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288280 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287229 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288384 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288438 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288767 4979 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288786 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288799 4979 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.289516 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.295559 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.295627 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47" (OuterVolumeSpecName: "kube-api-access-bqg47") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "kube-api-access-bqg47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.295851 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390154 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390213 4979 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390229 4979 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390241 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.871014 4979 generic.go:334] "Generic (PLEG): container finished" podID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerID="8964675dcc3f2890a07af98a3b878fffe8f0f13a5c075275dcf5b2e35d16b550" exitCode=0 Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.871125 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"8964675dcc3f2890a07af98a3b878fffe8f0f13a5c075275dcf5b2e35d16b550"} Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.873735 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h6sv5_cc25d794-4ead-4436-a026-179f655c13d4/console/0.log" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.873890 4979 generic.go:334] "Generic (PLEG): container finished" podID="cc25d794-4ead-4436-a026-179f655c13d4" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" exitCode=2 Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.873955 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerDied","Data":"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"} Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.874006 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerDied","Data":"964c8b1ba5415a6ffab5411d004a571cd2b1dc55669379c6f25606fce00667e5"} Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.874068 4979 scope.go:117] "RemoveContainer" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" Jan 30 21:56:09 crc 
Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.904817 4979 scope.go:117] "RemoveContainer" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"
Jan 30 21:56:09 crc kubenswrapper[4979]: E0130 21:56:09.905430 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322\": container with ID starting with 37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322 not found: ID does not exist" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"
Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.905482 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"} err="failed to get container status \"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322\": rpc error: code = NotFound desc = could not find container \"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322\": container with ID starting with 37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322 not found: ID does not exist"
Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.928246 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"]
Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.931624 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"]
Jan 30 21:56:11 crc kubenswrapper[4979]: I0130 21:56:11.080286 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc25d794-4ead-4436-a026-179f655c13d4" path="/var/lib/kubelet/pods/cc25d794-4ead-4436-a026-179f655c13d4/volumes"
Jan 30 21:56:11 crc kubenswrapper[4979]: I0130 21:56:11.895322 4979 generic.go:334] "Generic (PLEG): container finished" podID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerID="ee4556dc4a0b4ab431233fc4bfd44f5fc7311a133dbffc20bdc184a0cc538ac8" exitCode=0
Jan 30 21:56:11 crc kubenswrapper[4979]: I0130 21:56:11.895416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"ee4556dc4a0b4ab431233fc4bfd44f5fc7311a133dbffc20bdc184a0cc538ac8"}
Jan 30 21:56:12 crc kubenswrapper[4979]: I0130 21:56:12.906551 4979 generic.go:334] "Generic (PLEG): container finished" podID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerID="80c9599fd060a9e7859794828d9e70447ffbf0c97210fa24ffebe82f93ce1f27" exitCode=0
Jan 30 21:56:12 crc kubenswrapper[4979]: I0130 21:56:12.906656 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"80c9599fd060a9e7859794828d9e70447ffbf0c97210fa24ffebe82f93ce1f27"}
Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.232515 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.362385 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.362507 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.362814 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.364693 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle" (OuterVolumeSpecName: "bundle") pod "3a16a524-cbae-4652-8fbd-e0b2430ec7d5" (UID: "3a16a524-cbae-4652-8fbd-e0b2430ec7d5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.373939 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25" (OuterVolumeSpecName: "kube-api-access-jnf25") pod "3a16a524-cbae-4652-8fbd-e0b2430ec7d5" (UID: "3a16a524-cbae-4652-8fbd-e0b2430ec7d5"). InnerVolumeSpecName "kube-api-access-jnf25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.457203 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util" (OuterVolumeSpecName: "util") pod "3a16a524-cbae-4652-8fbd-e0b2430ec7d5" (UID: "3a16a524-cbae-4652-8fbd-e0b2430ec7d5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.466277 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.466363 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.466450 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.927961 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"fa202ddbc836e952b85b3227e8f9d2ef0dfbc5d3f331b0b87d6066b738c774c0"} Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.928029 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa202ddbc836e952b85b3227e8f9d2ef0dfbc5d3f331b0b87d6066b738c774c0" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.928138 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.461371 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"] Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462404 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="pull" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462419 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="pull" Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462429 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="util" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462435 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="util" Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462447 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462454 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462469 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="extract" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462475 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="extract" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462594 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" Jan 30 
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.463090 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.465474 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.465815 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.466111 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.466449 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6rprj"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.468315 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.482978 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"]
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.636693 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-apiservice-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.636787 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbj25\" (UniqueName: \"kubernetes.io/projected/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-kube-api-access-wbj25\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.636868 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-webhook-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.728417 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"]
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.729408 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
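
The "Caches populated for *v1.Secret" lines come from the kubelet's reflectors warming per-namespace caches for the secrets and configmaps the new metallb pods mount. The same machinery is exposed through client-go informers; a minimal sketch (the kubeconfig path is illustrative, the namespace comes from the log):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	factory := informers.NewSharedInformerFactoryWithOptions(clientset, 0,
		informers.WithNamespace("metallb-system"))
	secrets := factory.Core().V1().Secrets().Informer()
	factory.Start(ctx.Done())
	// Blocks until the initial LIST+WATCH fills the cache: the moment the
	// kubelet's reflector logs as "Caches populated for *v1.Secret".
	if !cache.WaitForCacheSync(ctx.Done(), secrets.HasSynced) {
		panic("secret cache never synced")
	}
	fmt.Println("secret cache warm")
}
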
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.734268 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dgwfz" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.734814 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.734942 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.738071 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-webhook-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.738151 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-apiservice-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.738191 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbj25\" (UniqueName: \"kubernetes.io/projected/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-kube-api-access-wbj25\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.747202 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-webhook-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.747600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-apiservice-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.758354 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"] Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.768001 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbj25\" (UniqueName: \"kubernetes.io/projected/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-kube-api-access-wbj25\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.781698 4979 util.go:30] "No sandbox for pod 
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.839537 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-webhook-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.839613 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-apiservice-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.840166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvld\" (UniqueName: \"kubernetes.io/projected/04d21772-3311-4f78-a621-a66fa5d1cb7d-kube-api-access-hqvld\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.942335 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-webhook-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.942857 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-apiservice-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.942905 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvld\" (UniqueName: \"kubernetes.io/projected/04d21772-3311-4f78-a621-a66fa5d1cb7d-kube-api-access-hqvld\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.950732 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-apiservice-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.951875 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-webhook-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"
\"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.972658 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvld\" (UniqueName: \"kubernetes.io/projected/04d21772-3311-4f78-a621-a66fa5d1cb7d-kube-api-access-hqvld\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:25 crc kubenswrapper[4979]: I0130 21:56:25.052212 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:25 crc kubenswrapper[4979]: I0130 21:56:25.255750 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"] Jan 30 21:56:25 crc kubenswrapper[4979]: I0130 21:56:25.320314 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"] Jan 30 21:56:25 crc kubenswrapper[4979]: W0130 21:56:25.328154 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04d21772_3311_4f78_a621_a66fa5d1cb7d.slice/crio-ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a WatchSource:0}: Error finding container ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a: Status 404 returned error can't find the container with id ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a Jan 30 21:56:26 crc kubenswrapper[4979]: I0130 21:56:26.008449 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" event={"ID":"04d21772-3311-4f78-a621-a66fa5d1cb7d","Type":"ContainerStarted","Data":"ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a"} Jan 30 21:56:26 crc kubenswrapper[4979]: I0130 21:56:26.011495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" event={"ID":"30c6b9df-d3aa-4a9a-807e-93d8b11c9159","Type":"ContainerStarted","Data":"df21423e823fc936c7379b471ffa423360c5676ae5d0ba9918eaa461fc10bfd7"} Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.706520 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.710294 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.711987 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.728664 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.728739 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.728802 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830139 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830196 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830230 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.863457 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.038158 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.046586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" event={"ID":"30c6b9df-d3aa-4a9a-807e-93d8b11c9159","Type":"ContainerStarted","Data":"ae2f7c4d4eaab24befe0ea9c34c811f6ddb50f955e1cf876a5d47dc1ff694d9d"} Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.047422 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.092913 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" podStartSLOduration=1.583433348 podStartE2EDuration="6.092896972s" podCreationTimestamp="2026-01-30 21:56:24 +0000 UTC" firstStartedPulling="2026-01-30 21:56:25.27322039 +0000 UTC m=+981.234467423" lastFinishedPulling="2026-01-30 21:56:29.782684014 +0000 UTC m=+985.743931047" observedRunningTime="2026-01-30 21:56:30.087810204 +0000 UTC m=+986.049057237" watchObservedRunningTime="2026-01-30 21:56:30.092896972 +0000 UTC m=+986.054144005" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.332194 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.055462 4979 generic.go:334] "Generic (PLEG): container finished" podID="822b342e-14fa-4653-8217-bea9a32e90aa" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" exitCode=0 Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.055582 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637"} Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.055644 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerStarted","Data":"fd8ae73f685afdd6826a3e84a0be1355988be291a6aa0ddd82b95f6ef976bfc3"} Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.057051 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" event={"ID":"04d21772-3311-4f78-a621-a66fa5d1cb7d","Type":"ContainerStarted","Data":"a52b3ae95f0623fdc0546910b27ef1aaae547f433daf48c54b8592b2ff3d29f6"} Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.057202 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.110991 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" podStartSLOduration=2.655210245 
podStartE2EDuration="7.110965929s" podCreationTimestamp="2026-01-30 21:56:24 +0000 UTC" firstStartedPulling="2026-01-30 21:56:25.331648819 +0000 UTC m=+981.292895852" lastFinishedPulling="2026-01-30 21:56:29.787404503 +0000 UTC m=+985.748651536" observedRunningTime="2026-01-30 21:56:31.107738922 +0000 UTC m=+987.068985965" watchObservedRunningTime="2026-01-30 21:56:31.110965929 +0000 UTC m=+987.072212972" Jan 30 21:56:32 crc kubenswrapper[4979]: I0130 21:56:32.066295 4979 generic.go:334] "Generic (PLEG): container finished" podID="822b342e-14fa-4653-8217-bea9a32e90aa" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" exitCode=0 Jan 30 21:56:32 crc kubenswrapper[4979]: I0130 21:56:32.066401 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377"} Jan 30 21:56:33 crc kubenswrapper[4979]: I0130 21:56:33.080077 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerStarted","Data":"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268"} Jan 30 21:56:33 crc kubenswrapper[4979]: I0130 21:56:33.102311 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ns9mx" podStartSLOduration=2.567109361 podStartE2EDuration="4.102288129s" podCreationTimestamp="2026-01-30 21:56:29 +0000 UTC" firstStartedPulling="2026-01-30 21:56:31.05813311 +0000 UTC m=+987.019380143" lastFinishedPulling="2026-01-30 21:56:32.593311878 +0000 UTC m=+988.554558911" observedRunningTime="2026-01-30 21:56:33.09678875 +0000 UTC m=+989.058035783" watchObservedRunningTime="2026-01-30 21:56:33.102288129 +0000 UTC m=+989.063535162" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.039795 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.040797 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.111376 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.231546 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:41 crc kubenswrapper[4979]: I0130 21:56:41.249797 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:42 crc kubenswrapper[4979]: I0130 21:56:42.129433 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ns9mx" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" containerID="cri-o://5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" gracePeriod=2 Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.093116 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145186 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"822b342e-14fa-4653-8217-bea9a32e90aa\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145263 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"822b342e-14fa-4653-8217-bea9a32e90aa\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145361 4979 generic.go:334] "Generic (PLEG): container finished" podID="822b342e-14fa-4653-8217-bea9a32e90aa" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" exitCode=0 Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145412 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"822b342e-14fa-4653-8217-bea9a32e90aa\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145420 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268"} Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"fd8ae73f685afdd6826a3e84a0be1355988be291a6aa0ddd82b95f6ef976bfc3"} Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145476 4979 scope.go:117] "RemoveContainer" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145620 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.148719 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities" (OuterVolumeSpecName: "utilities") pod "822b342e-14fa-4653-8217-bea9a32e90aa" (UID: "822b342e-14fa-4653-8217-bea9a32e90aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.159492 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l" (OuterVolumeSpecName: "kube-api-access-7cz6l") pod "822b342e-14fa-4653-8217-bea9a32e90aa" (UID: "822b342e-14fa-4653-8217-bea9a32e90aa"). InnerVolumeSpecName "kube-api-access-7cz6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.178806 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "822b342e-14fa-4653-8217-bea9a32e90aa" (UID: "822b342e-14fa-4653-8217-bea9a32e90aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.188523 4979 scope.go:117] "RemoveContainer" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.206142 4979 scope.go:117] "RemoveContainer" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.224331 4979 scope.go:117] "RemoveContainer" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" Jan 30 21:56:44 crc kubenswrapper[4979]: E0130 21:56:44.225083 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268\": container with ID starting with 5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268 not found: ID does not exist" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225150 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268"} err="failed to get container status \"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268\": rpc error: code = NotFound desc = could not find container \"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268\": container with ID starting with 5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268 not found: ID does not exist" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225188 4979 scope.go:117] "RemoveContainer" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" Jan 30 21:56:44 crc kubenswrapper[4979]: E0130 21:56:44.225659 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377\": container with ID starting with 698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377 not found: ID does not exist" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225720 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377"} err="failed to get container status \"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377\": rpc error: code = NotFound desc = could not find container \"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377\": container with ID starting with 698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377 not found: ID does not exist" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225769 4979 scope.go:117] "RemoveContainer" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" Jan 30 21:56:44 crc kubenswrapper[4979]: 
E0130 21:56:44.226201 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637\": container with ID starting with 859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637 not found: ID does not exist" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.226248 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637"} err="failed to get container status \"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637\": rpc error: code = NotFound desc = could not find container \"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637\": container with ID starting with 859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637 not found: ID does not exist" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.247793 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.247825 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.247835 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.487688 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.494225 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:45 crc kubenswrapper[4979]: I0130 21:56:45.058813 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:45 crc kubenswrapper[4979]: I0130 21:56:45.078739 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" path="/var/lib/kubelet/pods/822b342e-14fa-4653-8217-bea9a32e90aa/volumes" Jan 30 21:57:04 crc kubenswrapper[4979]: I0130 21:57:04.785675 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.427816 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cnk7l"] Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.428222 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-content" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428247 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-content" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.428261 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-utilities" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428272 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-utilities" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.428295 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428304 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428443 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.430891 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.434340 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.435602 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.436858 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ghkjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.436866 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.436894 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.441179 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.447811 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.542764 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v2nkx"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.557418 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.564692 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7qf65" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.564890 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.564974 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.565070 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.584958 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6whjn"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594661 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594710 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzf5\" (UniqueName: \"kubernetes.io/projected/edde5f2f-1d96-49c5-aee3-92f1b77ac088-kube-api-access-gxzf5\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594743 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-reloader\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-conf\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594815 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxndl\" (UniqueName: \"kubernetes.io/projected/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-kube-api-access-vxndl\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594836 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-startup\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594864 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-sockets\") pod \"frr-k8s-cnk7l\" (UID: 
\"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594892 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.597947 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.602089 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.608620 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6whjn"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.696735 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-reloader\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-conf\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697328 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxndl\" (UniqueName: \"kubernetes.io/projected/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-kube-api-access-vxndl\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-startup\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697385 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-sockets\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697421 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: 
\"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697467 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-metrics-certs\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697486 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697509 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6a083acc-78e0-41df-84ad-70c965c7bb5a-metallb-excludel2\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697537 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6vf\" (UniqueName: \"kubernetes.io/projected/6a083acc-78e0-41df-84ad-70c965c7bb5a-kube-api-access-gr6vf\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697566 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzf5\" (UniqueName: \"kubernetes.io/projected/edde5f2f-1d96-49c5-aee3-92f1b77ac088-kube-api-access-gxzf5\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.698330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-reloader\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.698537 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-conf\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.699529 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-startup\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.699727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-sockets\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.699803 4979 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.699850 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs podName:edde5f2f-1d96-49c5-aee3-92f1b77ac088 nodeName:}" failed. No retries permitted until 2026-01-30 21:57:06.199836174 +0000 UTC m=+1022.161083207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs") pod "frr-k8s-cnk7l" (UID: "edde5f2f-1d96-49c5-aee3-92f1b77ac088") : secret "frr-k8s-certs-secret" not found Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.702419 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.708605 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.714951 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxndl\" (UniqueName: \"kubernetes.io/projected/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-kube-api-access-vxndl\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.716800 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzf5\" (UniqueName: \"kubernetes.io/projected/edde5f2f-1d96-49c5-aee3-92f1b77ac088-kube-api-access-gxzf5\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.761988 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.798979 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-metrics-certs\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799524 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799631 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglzk\" (UniqueName: \"kubernetes.io/projected/b9bf7d77-b99e-4190-8510-dd0778767e89-kube-api-access-gglzk\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799720 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-cert\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.799778 4979 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799814 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-metrics-certs\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.799889 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist podName:6a083acc-78e0-41df-84ad-70c965c7bb5a nodeName:}" failed. No retries permitted until 2026-01-30 21:57:06.299864509 +0000 UTC m=+1022.261111562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist") pod "speaker-v2nkx" (UID: "6a083acc-78e0-41df-84ad-70c965c7bb5a") : secret "metallb-memberlist" not found Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.800088 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6a083acc-78e0-41df-84ad-70c965c7bb5a-metallb-excludel2\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.800186 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6vf\" (UniqueName: \"kubernetes.io/projected/6a083acc-78e0-41df-84ad-70c965c7bb5a-kube-api-access-gr6vf\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.801271 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6a083acc-78e0-41df-84ad-70c965c7bb5a-metallb-excludel2\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.807499 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-metrics-certs\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.832653 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6vf\" (UniqueName: \"kubernetes.io/projected/6a083acc-78e0-41df-84ad-70c965c7bb5a-kube-api-access-gr6vf\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.901572 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-metrics-certs\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.901653 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gglzk\" (UniqueName: \"kubernetes.io/projected/b9bf7d77-b99e-4190-8510-dd0778767e89-kube-api-access-gglzk\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.901673 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-cert\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.903863 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.908935 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-metrics-certs\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.916592 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-cert\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.921693 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gglzk\" (UniqueName: \"kubernetes.io/projected/b9bf7d77-b99e-4190-8510-dd0778767e89-kube-api-access-gglzk\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.006012 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv"] Jan 30 21:57:06 crc kubenswrapper[4979]: W0130 21:57:06.015540 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8932bcf_8e7b_4302_a623_ece7abe7d2e2.slice/crio-a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26 WatchSource:0}: Error finding container a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26: Status 404 returned error can't find the container with id a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26 Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.205736 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.210991 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.212580 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.307256 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:06 crc kubenswrapper[4979]: E0130 21:57:06.307455 4979 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:57:06 crc kubenswrapper[4979]: E0130 21:57:06.307538 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist podName:6a083acc-78e0-41df-84ad-70c965c7bb5a nodeName:}" failed. No retries permitted until 2026-01-30 21:57:07.307506004 +0000 UTC m=+1023.268753037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist") pod "speaker-v2nkx" (UID: "6a083acc-78e0-41df-84ad-70c965c7bb5a") : secret "metallb-memberlist" not found Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.322215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" event={"ID":"f8932bcf-8e7b-4302-a623-ece7abe7d2e2","Type":"ContainerStarted","Data":"a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26"} Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.355753 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.468007 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6whjn"] Jan 30 21:57:06 crc kubenswrapper[4979]: W0130 21:57:06.473864 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9bf7d77_b99e_4190_8510_dd0778767e89.slice/crio-b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd WatchSource:0}: Error finding container b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd: Status 404 returned error can't find the container with id b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.329459 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.338596 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.339747 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6whjn" event={"ID":"b9bf7d77-b99e-4190-8510-dd0778767e89","Type":"ContainerStarted","Data":"4454f40e164e3694958bf43824ea8f0b8c1ae2c5a7b14bd6c7da74737dcb3f04"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.339810 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6whjn" event={"ID":"b9bf7d77-b99e-4190-8510-dd0778767e89","Type":"ContainerStarted","Data":"155f2bbedb9491c8cbb2c4cbcf3bd397c1b1690aaf92102597172b1c122130f4"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.339820 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6whjn" event={"ID":"b9bf7d77-b99e-4190-8510-dd0778767e89","Type":"ContainerStarted","Data":"b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.342635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.345413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" 
event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"7c2fe528c2d4e854fc39b9fd99bf4818b764166e231d2934501aa4ac7b53af45"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.366573 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6whjn" podStartSLOduration=2.366548408 podStartE2EDuration="2.366548408s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:57:07.362339784 +0000 UTC m=+1023.323586817" watchObservedRunningTime="2026-01-30 21:57:07.366548408 +0000 UTC m=+1023.327795451" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.400157 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356193 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2nkx" event={"ID":"6a083acc-78e0-41df-84ad-70c965c7bb5a","Type":"ContainerStarted","Data":"df85e0c46af00c5ce0640e6f1460561c76e351c8e13d0b77eb3734b41ea564c9"} Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356576 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2nkx" event={"ID":"6a083acc-78e0-41df-84ad-70c965c7bb5a","Type":"ContainerStarted","Data":"21d07e8bbb90331a257489180141584d9ead46e30ca327b3d045f58880a95b80"} Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2nkx" event={"ID":"6a083acc-78e0-41df-84ad-70c965c7bb5a","Type":"ContainerStarted","Data":"26a16657fb9c20c67806b7ac0c2a3f99cd56c1621e8fea09789ab2cc81d08760"} Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356786 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.385009 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v2nkx" podStartSLOduration=3.384988395 podStartE2EDuration="3.384988395s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:57:08.378668513 +0000 UTC m=+1024.339915546" watchObservedRunningTime="2026-01-30 21:57:08.384988395 +0000 UTC m=+1024.346235428" Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.412264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" event={"ID":"f8932bcf-8e7b-4302-a623-ece7abe7d2e2","Type":"ContainerStarted","Data":"dc4d416b8a795eee7dd714f3c84a0c6fd65b1903b3994e5988437e0e150275d6"} Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.412751 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.415320 4979 generic.go:334] "Generic (PLEG): container finished" podID="edde5f2f-1d96-49c5-aee3-92f1b77ac088" containerID="6f8cb9f245fa1decb42f731396b25075acde89cd3bca790a35ae98f6a89131a1" exitCode=0 Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.415430 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" 
event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerDied","Data":"6f8cb9f245fa1decb42f731396b25075acde89cd3bca790a35ae98f6a89131a1"} Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.438103 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" podStartSLOduration=1.8268818470000001 podStartE2EDuration="9.438072025s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="2026-01-30 21:57:06.020476714 +0000 UTC m=+1021.981723747" lastFinishedPulling="2026-01-30 21:57:13.631666892 +0000 UTC m=+1029.592913925" observedRunningTime="2026-01-30 21:57:14.435895466 +0000 UTC m=+1030.397142499" watchObservedRunningTime="2026-01-30 21:57:14.438072025 +0000 UTC m=+1030.399319078" Jan 30 21:57:15 crc kubenswrapper[4979]: I0130 21:57:15.427108 4979 generic.go:334] "Generic (PLEG): container finished" podID="edde5f2f-1d96-49c5-aee3-92f1b77ac088" containerID="0a519d46de9520be742b5bc7ecc3a73261f6fd3c37c0bf0192fb012534ad7751" exitCode=0 Jan 30 21:57:15 crc kubenswrapper[4979]: I0130 21:57:15.427259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerDied","Data":"0a519d46de9520be742b5bc7ecc3a73261f6fd3c37c0bf0192fb012534ad7751"} Jan 30 21:57:16 crc kubenswrapper[4979]: I0130 21:57:16.219665 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:16 crc kubenswrapper[4979]: I0130 21:57:16.443069 4979 generic.go:334] "Generic (PLEG): container finished" podID="edde5f2f-1d96-49c5-aee3-92f1b77ac088" containerID="687b230b79d478629a3b5a1a54d8209f287a8f2853fa3c15f4a243c9e5146e5f" exitCode=0 Jan 30 21:57:16 crc kubenswrapper[4979]: I0130 21:57:16.443134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerDied","Data":"687b230b79d478629a3b5a1a54d8209f287a8f2853fa3c15f4a243c9e5146e5f"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.405023 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461853 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"9eaa7c9e520b1f9d7e988eb0d6cc204c1bd1c2ce26e810a56853f6570cb7cbb6"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461915 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"483a74cb65027f36edaaf272987268eb1d37fce59cf8e9e4c064e17bfcb63baf"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461928 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"a9dc121ea455deb3798183ad37879331242e35a8c7b46ceaba90319fc7923bb4"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461938 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"3a69036622a19cf0bb571911030b8012537d593e7e6982e6ea575915373afb53"} Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.476009 
4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"23f7fc2e859a205f6c8ed25abbbc1c10b75076b39d07c58e5042ebfcbdc3bdd9"} Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.476404 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.476418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"8fab229313ba23f445f06eab1c249599c7ea18286b2a404794be886563be4f1f"} Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.507498 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cnk7l" podStartSLOduration=6.343467275 podStartE2EDuration="13.507472442s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="2026-01-30 21:57:06.522153338 +0000 UTC m=+1022.483400371" lastFinishedPulling="2026-01-30 21:57:13.686158495 +0000 UTC m=+1029.647405538" observedRunningTime="2026-01-30 21:57:18.502756805 +0000 UTC m=+1034.464003848" watchObservedRunningTime="2026-01-30 21:57:18.507472442 +0000 UTC m=+1034.468719485" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.897187 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc"] Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.898628 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.910019 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.940149 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc"] Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.017229 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.017281 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.017351 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.118782 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.118879 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.118910 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.119427 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.119710 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.141353 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.224236 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.463205 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc"] Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.483494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerStarted","Data":"f3502a227fcac407c208d6a315f5323083b23b991e33b03812a555d3684eedbf"} Jan 30 21:57:20 crc kubenswrapper[4979]: I0130 21:57:20.492308 4979 generic.go:334] "Generic (PLEG): container finished" podID="20b0495c-9015-4cd9-9381-096926c32623" containerID="14bc5dd843028f44fba21e25f302e0081d7ede254e083e89946c5ea930a2ec7c" exitCode=0 Jan 30 21:57:20 crc kubenswrapper[4979]: I0130 21:57:20.492384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"14bc5dd843028f44fba21e25f302e0081d7ede254e083e89946c5ea930a2ec7c"} Jan 30 21:57:21 crc kubenswrapper[4979]: I0130 21:57:21.357263 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:21 crc kubenswrapper[4979]: I0130 21:57:21.486678 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:24 crc kubenswrapper[4979]: I0130 21:57:24.526217 4979 generic.go:334] "Generic (PLEG): container finished" podID="20b0495c-9015-4cd9-9381-096926c32623" containerID="f7a6336aa36e6067c52252cc9875022a5ce758e3ba0fc5ce20b405a98bd3f083" exitCode=0 Jan 30 21:57:24 crc kubenswrapper[4979]: I0130 21:57:24.526314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"f7a6336aa36e6067c52252cc9875022a5ce758e3ba0fc5ce20b405a98bd3f083"} Jan 30 21:57:25 crc kubenswrapper[4979]: I0130 21:57:25.536061 4979 generic.go:334] "Generic (PLEG): container finished" podID="20b0495c-9015-4cd9-9381-096926c32623" containerID="d50cfa0598b2c9a6a51f299020c259319cd700ac02cf742802fc0ec6d47d05b2" exitCode=0 Jan 30 21:57:25 crc kubenswrapper[4979]: I0130 21:57:25.536143 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"d50cfa0598b2c9a6a51f299020c259319cd700ac02cf742802fc0ec6d47d05b2"} Jan 30 21:57:25 crc kubenswrapper[4979]: I0130 21:57:25.767696 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.359878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.827079 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.956754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"20b0495c-9015-4cd9-9381-096926c32623\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.956955 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"20b0495c-9015-4cd9-9381-096926c32623\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.956989 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"20b0495c-9015-4cd9-9381-096926c32623\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.958491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle" (OuterVolumeSpecName: "bundle") pod "20b0495c-9015-4cd9-9381-096926c32623" (UID: "20b0495c-9015-4cd9-9381-096926c32623"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.964167 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb" (OuterVolumeSpecName: "kube-api-access-jr6kb") pod "20b0495c-9015-4cd9-9381-096926c32623" (UID: "20b0495c-9015-4cd9-9381-096926c32623"). InnerVolumeSpecName "kube-api-access-jr6kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.966969 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util" (OuterVolumeSpecName: "util") pod "20b0495c-9015-4cd9-9381-096926c32623" (UID: "20b0495c-9015-4cd9-9381-096926c32623"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.058752 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.058790 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.058800 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.551827 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"f3502a227fcac407c208d6a315f5323083b23b991e33b03812a555d3684eedbf"} Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.551900 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3502a227fcac407c208d6a315f5323083b23b991e33b03812a555d3684eedbf" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.551917 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.148843 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd"] Jan 30 21:57:32 crc kubenswrapper[4979]: E0130 21:57:32.150056 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="extract" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150074 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="extract" Jan 30 21:57:32 crc kubenswrapper[4979]: E0130 21:57:32.150118 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="util" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150127 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="util" Jan 30 21:57:32 crc kubenswrapper[4979]: E0130 21:57:32.150138 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="pull" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150147 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="pull" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150299 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="extract" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.151006 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.161211 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8vxj7" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.161293 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.168858 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.183748 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd"] Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.238920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0330af3-c305-40ae-b65b-dbf13ed2c345-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.239015 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6zz4\" (UniqueName: \"kubernetes.io/projected/b0330af3-c305-40ae-b65b-dbf13ed2c345-kube-api-access-k6zz4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.340524 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0330af3-c305-40ae-b65b-dbf13ed2c345-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.340620 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6zz4\" (UniqueName: \"kubernetes.io/projected/b0330af3-c305-40ae-b65b-dbf13ed2c345-kube-api-access-k6zz4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.341701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0330af3-c305-40ae-b65b-dbf13ed2c345-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.365273 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6zz4\" (UniqueName: \"kubernetes.io/projected/b0330af3-c305-40ae-b65b-dbf13ed2c345-kube-api-access-k6zz4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.474332 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.761468 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd"] Jan 30 21:57:33 crc kubenswrapper[4979]: I0130 21:57:33.597385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" event={"ID":"b0330af3-c305-40ae-b65b-dbf13ed2c345","Type":"ContainerStarted","Data":"4048f6a6adbfa55d9327c1211a08a7e3e97b3814558fd5629016a32f12d0b1e8"} Jan 30 21:57:36 crc kubenswrapper[4979]: I0130 21:57:36.621189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" event={"ID":"b0330af3-c305-40ae-b65b-dbf13ed2c345","Type":"ContainerStarted","Data":"7b9e7e27ecd2927fc503093b88639d1f6677232f2406ebc93fa4cc6468bf61ca"} Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.025311 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" podStartSLOduration=5.65357256 podStartE2EDuration="9.025289744s" podCreationTimestamp="2026-01-30 21:57:32 +0000 UTC" firstStartedPulling="2026-01-30 21:57:32.837452863 +0000 UTC m=+1048.798699896" lastFinishedPulling="2026-01-30 21:57:36.209170047 +0000 UTC m=+1052.170417080" observedRunningTime="2026-01-30 21:57:36.650942941 +0000 UTC m=+1052.612189994" watchObservedRunningTime="2026-01-30 21:57:41.025289744 +0000 UTC m=+1056.986536777" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.031435 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pw6nw"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.032297 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.034702 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.034754 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.034825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t5chk" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.045292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pw6nw"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.102899 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.103018 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzxb\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-kube-api-access-flzxb\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.204732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.204826 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flzxb\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-kube-api-access-flzxb\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.229541 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.229604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flzxb\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-kube-api-access-flzxb\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.348349 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.587660 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pw6nw"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.659830 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" event={"ID":"7670008a-1d21-4255-8148-e85ac90a90d4","Type":"ContainerStarted","Data":"09af48dcacf329d9668f08a5ac87afb59194674753e83ccf1e1557c838f5bdbb"} Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.987964 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x57ft"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.989050 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.991175 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-w9v6x" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.997463 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x57ft"] Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.137183 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.137267 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dnfc\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-kube-api-access-8dnfc\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.238298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.238405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dnfc\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-kube-api-access-8dnfc\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.258656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.260752 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8dnfc\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-kube-api-access-8dnfc\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.359318 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.630754 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x57ft"] Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.665706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" event={"ID":"34da3314-5047-419b-8c7b-927cc2f00d8c","Type":"ContainerStarted","Data":"4716f37d88c507d4f77143a58754d3e30915b797a975dd36d692d65c23fd9278"} Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.358556 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-f88tb"] Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.361659 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.370719 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jlp55" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.374311 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-f88tb"] Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.519485 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-bound-sa-token\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.519563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfl9f\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-kube-api-access-vfl9f\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.621396 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-bound-sa-token\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.621584 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfl9f\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-kube-api-access-vfl9f\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.645513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfl9f\" (UniqueName: 
\"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-kube-api-access-vfl9f\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.653013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-bound-sa-token\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.688463 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.924712 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-f88tb"] Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.766370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" event={"ID":"34da3314-5047-419b-8c7b-927cc2f00d8c","Type":"ContainerStarted","Data":"668eba0bc9f62d1b85e23f7dc45ff71e36ed3431cfcd0b7c346a67f8d3f54af9"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.767905 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" event={"ID":"7670008a-1d21-4255-8148-e85ac90a90d4","Type":"ContainerStarted","Data":"48dda56076e06d540b8a6445ac3ae4a7f3e4500ff91bdb0119d81dae9c34a8bc"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.768057 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.769510 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-f88tb" event={"ID":"99fcd41b-c557-4bf0-abbb-b189f4aaaf41","Type":"ContainerStarted","Data":"a687ef5acc78c48e132549afa432dcadfce09aa417c3e91cb10c78cbeb9cb261"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.769566 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-f88tb" event={"ID":"99fcd41b-c557-4bf0-abbb-b189f4aaaf41","Type":"ContainerStarted","Data":"aeb94cfdef36afec93b13f04e7596a13af2da580ba44e80e72488fcd4f79c1f7"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.792683 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" podStartSLOduration=2.56229683 podStartE2EDuration="15.792651748s" podCreationTimestamp="2026-01-30 21:57:41 +0000 UTC" firstStartedPulling="2026-01-30 21:57:42.639612712 +0000 UTC m=+1058.600859745" lastFinishedPulling="2026-01-30 21:57:55.86996763 +0000 UTC m=+1071.831214663" observedRunningTime="2026-01-30 21:57:56.78571636 +0000 UTC m=+1072.746963413" watchObservedRunningTime="2026-01-30 21:57:56.792651748 +0000 UTC m=+1072.753898801" Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.816712 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" podStartSLOduration=1.569486776 podStartE2EDuration="15.816685067s" podCreationTimestamp="2026-01-30 21:57:41 +0000 UTC" firstStartedPulling="2026-01-30 21:57:41.604909476 +0000 UTC m=+1057.566156499" lastFinishedPulling="2026-01-30 21:57:55.852107757 +0000 UTC 
m=+1071.813354790" observedRunningTime="2026-01-30 21:57:56.807766147 +0000 UTC m=+1072.769013210" watchObservedRunningTime="2026-01-30 21:57:56.816685067 +0000 UTC m=+1072.777932100" Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.839495 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-f88tb" podStartSLOduration=5.839468973 podStartE2EDuration="5.839468973s" podCreationTimestamp="2026-01-30 21:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:57:56.834023296 +0000 UTC m=+1072.795270329" watchObservedRunningTime="2026-01-30 21:57:56.839468973 +0000 UTC m=+1072.800716006" Jan 30 21:58:01 crc kubenswrapper[4979]: I0130 21:58:01.351513 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:58:02 crc kubenswrapper[4979]: I0130 21:58:02.040002 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:02 crc kubenswrapper[4979]: I0130 21:58:02.040157 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.208169 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.209331 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.213230 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.213230 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.213230 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-b8s5h" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.239772 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.320275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"openstack-operator-index-xlffw\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.422185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"openstack-operator-index-xlffw\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.444098 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"openstack-operator-index-xlffw\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.541758 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.974369 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:05 crc kubenswrapper[4979]: I0130 21:58:05.836431 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerStarted","Data":"0c35f707623f9b4df2cd5ad136ddab4f99c10bc8eb3cd2fa22c7087fbcb0d077"} Jan 30 21:58:07 crc kubenswrapper[4979]: I0130 21:58:07.575496 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:07 crc kubenswrapper[4979]: I0130 21:58:07.850370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerStarted","Data":"a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2"} Jan 30 21:58:07 crc kubenswrapper[4979]: I0130 21:58:07.868309 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xlffw" podStartSLOduration=1.625783265 podStartE2EDuration="3.868289817s" podCreationTimestamp="2026-01-30 21:58:04 +0000 UTC" firstStartedPulling="2026-01-30 21:58:04.986207222 +0000 UTC m=+1080.947454255" lastFinishedPulling="2026-01-30 21:58:07.228713754 +0000 UTC m=+1083.189960807" observedRunningTime="2026-01-30 21:58:07.865483961 +0000 UTC m=+1083.826730994" watchObservedRunningTime="2026-01-30 21:58:07.868289817 +0000 UTC m=+1083.829536840" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.176563 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jl5wf"] Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.177543 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.193388 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl5wf"] Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.289372 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvsh\" (UniqueName: \"kubernetes.io/projected/bb59579b-3a3c-4ae9-b3fe-d4231a17e050-kube-api-access-4bvsh\") pod \"openstack-operator-index-jl5wf\" (UID: \"bb59579b-3a3c-4ae9-b3fe-d4231a17e050\") " pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.391124 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bvsh\" (UniqueName: \"kubernetes.io/projected/bb59579b-3a3c-4ae9-b3fe-d4231a17e050-kube-api-access-4bvsh\") pod \"openstack-operator-index-jl5wf\" (UID: \"bb59579b-3a3c-4ae9-b3fe-d4231a17e050\") " pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.412989 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bvsh\" (UniqueName: \"kubernetes.io/projected/bb59579b-3a3c-4ae9-b3fe-d4231a17e050-kube-api-access-4bvsh\") pod \"openstack-operator-index-jl5wf\" (UID: \"bb59579b-3a3c-4ae9-b3fe-d4231a17e050\") " pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.507001 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.856361 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xlffw" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" containerID="cri-o://a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2" gracePeriod=2 Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.999830 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl5wf"] Jan 30 21:58:09 crc kubenswrapper[4979]: W0130 21:58:09.008804 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb59579b_3a3c_4ae9_b3fe_d4231a17e050.slice/crio-88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c WatchSource:0}: Error finding container 88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c: Status 404 returned error can't find the container with id 88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c Jan 30 21:58:09 crc kubenswrapper[4979]: I0130 21:58:09.865416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl5wf" event={"ID":"bb59579b-3a3c-4ae9-b3fe-d4231a17e050","Type":"ContainerStarted","Data":"88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c"} Jan 30 21:58:10 crc kubenswrapper[4979]: I0130 21:58:10.873583 4979 generic.go:334] "Generic (PLEG): container finished" podID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerID="a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2" exitCode=0 Jan 30 21:58:10 crc kubenswrapper[4979]: I0130 21:58:10.873671 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerDied","Data":"a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2"} Jan 30 21:58:10 crc kubenswrapper[4979]: I0130 21:58:10.988179 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.141193 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"f3c416ee-b90c-4c0f-b679-b10f3468c224\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.149025 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg" (OuterVolumeSpecName: "kube-api-access-r64fg") pod "f3c416ee-b90c-4c0f-b679-b10f3468c224" (UID: "f3c416ee-b90c-4c0f-b679-b10f3468c224"). InnerVolumeSpecName "kube-api-access-r64fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.242999 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.883631 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl5wf" event={"ID":"bb59579b-3a3c-4ae9-b3fe-d4231a17e050","Type":"ContainerStarted","Data":"f53432ae5b86757feaf6b7f8344f90cf11c0b080240b539ea750b263490f1563"} Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.886301 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerDied","Data":"0c35f707623f9b4df2cd5ad136ddab4f99c10bc8eb3cd2fa22c7087fbcb0d077"} Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.886377 4979 scope.go:117] "RemoveContainer" containerID="a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.886619 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.919907 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jl5wf" podStartSLOduration=1.8940516490000001 podStartE2EDuration="3.919863012s" podCreationTimestamp="2026-01-30 21:58:08 +0000 UTC" firstStartedPulling="2026-01-30 21:58:09.013690526 +0000 UTC m=+1084.974937559" lastFinishedPulling="2026-01-30 21:58:11.039501899 +0000 UTC m=+1087.000748922" observedRunningTime="2026-01-30 21:58:11.903281244 +0000 UTC m=+1087.864528277" watchObservedRunningTime="2026-01-30 21:58:11.919863012 +0000 UTC m=+1087.881110045" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.930412 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.934458 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:13 crc kubenswrapper[4979]: I0130 21:58:13.081210 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" path="/var/lib/kubelet/pods/f3c416ee-b90c-4c0f-b679-b10f3468c224/volumes" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.507523 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.508280 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.553340 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.985498 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.032955 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"] Jan 30 21:58:20 crc kubenswrapper[4979]: E0130 21:58:20.033513 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.033541 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.033772 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.035597 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.038309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wbt8z" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.041172 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"] Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.086851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.086903 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.087278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.188757 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.188864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.188894 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.189362 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.189506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.213885 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.363926 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.628003 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"] Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.965239 4979 generic.go:334] "Generic (PLEG): container finished" podID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerID="7570d588bbcf72632b6e3c445a99405e91f58f126b88077afde432d1fcac2dfe" exitCode=0 Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.965302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"7570d588bbcf72632b6e3c445a99405e91f58f126b88077afde432d1fcac2dfe"} Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.965340 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerStarted","Data":"992f2cb14c18dcf574f093a0a0067c3fbdbc7d307e0de5a7ae550e21f4f53948"} Jan 30 21:58:21 crc kubenswrapper[4979]: I0130 21:58:21.973339 4979 generic.go:334] "Generic (PLEG): container finished" podID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerID="3c0861ecad82b499f7f4e3c57750243ee6aea3aed93a08fcb20be4fc5a75d352" exitCode=0 Jan 30 21:58:21 crc kubenswrapper[4979]: I0130 21:58:21.973390 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"3c0861ecad82b499f7f4e3c57750243ee6aea3aed93a08fcb20be4fc5a75d352"} Jan 30 21:58:22 crc kubenswrapper[4979]: I0130 21:58:22.984152 4979 generic.go:334] "Generic (PLEG): container finished" podID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerID="01f3ec01eefaa6ffc24c194ebfba2d370bd7e52c428aca05c86c41cce7d455d7" exitCode=0 Jan 30 21:58:22 crc kubenswrapper[4979]: I0130 21:58:22.984265 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"01f3ec01eefaa6ffc24c194ebfba2d370bd7e52c428aca05c86c41cce7d455d7"} Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.221433 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.354948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.355664 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.355711 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.356955 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle" (OuterVolumeSpecName: "bundle") pod "b788bb72-addf-4df0-9fa8-e27fb8e1e10a" (UID: "b788bb72-addf-4df0-9fa8-e27fb8e1e10a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.367154 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz" (OuterVolumeSpecName: "kube-api-access-7gwhz") pod "b788bb72-addf-4df0-9fa8-e27fb8e1e10a" (UID: "b788bb72-addf-4df0-9fa8-e27fb8e1e10a"). InnerVolumeSpecName "kube-api-access-7gwhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.371407 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util" (OuterVolumeSpecName: "util") pod "b788bb72-addf-4df0-9fa8-e27fb8e1e10a" (UID: "b788bb72-addf-4df0-9fa8-e27fb8e1e10a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.458483 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.458547 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.458571 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:25 crc kubenswrapper[4979]: I0130 21:58:25.007230 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"992f2cb14c18dcf574f093a0a0067c3fbdbc7d307e0de5a7ae550e21f4f53948"} Jan 30 21:58:25 crc kubenswrapper[4979]: I0130 21:58:25.007278 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:25 crc kubenswrapper[4979]: I0130 21:58:25.007296 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992f2cb14c18dcf574f093a0a0067c3fbdbc7d307e0de5a7ae550e21f4f53948" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.419595 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"] Jan 30 21:58:27 crc kubenswrapper[4979]: E0130 21:58:27.420440 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="extract" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420458 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="extract" Jan 30 21:58:27 crc kubenswrapper[4979]: E0130 21:58:27.420479 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="util" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420488 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="util" Jan 30 21:58:27 crc kubenswrapper[4979]: E0130 21:58:27.420509 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="pull" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420518 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="pull" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420678 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="extract" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.421304 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.429680 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-62pb8" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.458064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"] Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.510129 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcfl\" (UniqueName: \"kubernetes.io/projected/9a874b50-c515-45d3-8562-05532a2c5adc-kube-api-access-mmcfl\") pod \"openstack-operator-controller-init-7c7d885c49-dmwtw\" (UID: \"9a874b50-c515-45d3-8562-05532a2c5adc\") " pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.611421 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcfl\" (UniqueName: \"kubernetes.io/projected/9a874b50-c515-45d3-8562-05532a2c5adc-kube-api-access-mmcfl\") pod \"openstack-operator-controller-init-7c7d885c49-dmwtw\" (UID: \"9a874b50-c515-45d3-8562-05532a2c5adc\") " pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.635048 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcfl\" (UniqueName: \"kubernetes.io/projected/9a874b50-c515-45d3-8562-05532a2c5adc-kube-api-access-mmcfl\") pod \"openstack-operator-controller-init-7c7d885c49-dmwtw\" (UID: \"9a874b50-c515-45d3-8562-05532a2c5adc\") " pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.746124 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:28 crc kubenswrapper[4979]: I0130 21:58:28.048264 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"] Jan 30 21:58:29 crc kubenswrapper[4979]: I0130 21:58:29.041958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" event={"ID":"9a874b50-c515-45d3-8562-05532a2c5adc","Type":"ContainerStarted","Data":"5a1ce447f0756d7869a65fa28287907b1677da6df0bd8352bedb7b7acf9acb51"} Jan 30 21:58:32 crc kubenswrapper[4979]: I0130 21:58:32.039638 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:32 crc kubenswrapper[4979]: I0130 21:58:32.040043 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:58:33 crc kubenswrapper[4979]: I0130 21:58:33.081749 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" event={"ID":"9a874b50-c515-45d3-8562-05532a2c5adc","Type":"ContainerStarted","Data":"e0e32a9adce0eeecd646e8c3b8b6d62c5327b78877f3a3031f9c800cf98bc14c"} Jan 30 21:58:33 crc kubenswrapper[4979]: I0130 21:58:33.081958 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:33 crc kubenswrapper[4979]: I0130 21:58:33.133856 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" podStartSLOduration=2.244601882 podStartE2EDuration="6.133832328s" podCreationTimestamp="2026-01-30 21:58:27 +0000 UTC" firstStartedPulling="2026-01-30 21:58:28.065890023 +0000 UTC m=+1104.027137056" lastFinishedPulling="2026-01-30 21:58:31.955120469 +0000 UTC m=+1107.916367502" observedRunningTime="2026-01-30 21:58:33.127426136 +0000 UTC m=+1109.088673169" watchObservedRunningTime="2026-01-30 21:58:33.133832328 +0000 UTC m=+1109.095079371" Jan 30 21:58:37 crc kubenswrapper[4979]: I0130 21:58:37.750247 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.173619 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.175636 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.181350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t26h5" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.189247 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.190323 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.192438 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zntj5" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.200718 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.202108 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.229596 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n7vkg" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.249458 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjpg\" (UniqueName: \"kubernetes.io/projected/9134e6d2-b638-49be-9612-be12250e0a6d-kube-api-access-qbjpg\") pod \"designate-operator-controller-manager-8f4c5cb64-5k7wd\" (UID: \"9134e6d2-b638-49be-9612-be12250e0a6d\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.249542 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhm6\" (UniqueName: \"kubernetes.io/projected/dcd08638-857d-40cd-a92c-b6dcef0bc329-kube-api-access-xbhm6\") pod \"barbican-operator-controller-manager-fc589b45f-r2mb8\" (UID: \"dcd08638-857d-40cd-a92c-b6dcef0bc329\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.249575 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvv96\" (UniqueName: \"kubernetes.io/projected/11771b88-abd2-436e-a95c-5113a5bae88b-kube-api-access-dvv96\") pod \"cinder-operator-controller-manager-787499fbb-p95sz\" (UID: \"11771b88-abd2-436e-a95c-5113a5bae88b\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.251124 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.275060 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.350751 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjpg\" 
(UniqueName: \"kubernetes.io/projected/9134e6d2-b638-49be-9612-be12250e0a6d-kube-api-access-qbjpg\") pod \"designate-operator-controller-manager-8f4c5cb64-5k7wd\" (UID: \"9134e6d2-b638-49be-9612-be12250e0a6d\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.350811 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbhm6\" (UniqueName: \"kubernetes.io/projected/dcd08638-857d-40cd-a92c-b6dcef0bc329-kube-api-access-xbhm6\") pod \"barbican-operator-controller-manager-fc589b45f-r2mb8\" (UID: \"dcd08638-857d-40cd-a92c-b6dcef0bc329\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.350833 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvv96\" (UniqueName: \"kubernetes.io/projected/11771b88-abd2-436e-a95c-5113a5bae88b-kube-api-access-dvv96\") pod \"cinder-operator-controller-manager-787499fbb-p95sz\" (UID: \"11771b88-abd2-436e-a95c-5113a5bae88b\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.364596 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.393180 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.394429 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.405113 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.416385 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tpqqj" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.423144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvv96\" (UniqueName: \"kubernetes.io/projected/11771b88-abd2-436e-a95c-5113a5bae88b-kube-api-access-dvv96\") pod \"cinder-operator-controller-manager-787499fbb-p95sz\" (UID: \"11771b88-abd2-436e-a95c-5113a5bae88b\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.425738 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjpg\" (UniqueName: \"kubernetes.io/projected/9134e6d2-b638-49be-9612-be12250e0a6d-kube-api-access-qbjpg\") pod \"designate-operator-controller-manager-8f4c5cb64-5k7wd\" (UID: \"9134e6d2-b638-49be-9612-be12250e0a6d\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.451396 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbfx\" (UniqueName: \"kubernetes.io/projected/8893a935-e9c7-4d38-ae0c-17a94445475f-kube-api-access-qrbfx\") pod \"glance-operator-controller-manager-6bfc9d4d48-zqjfh\" (UID: \"8893a935-e9c7-4d38-ae0c-17a94445475f\") " 
pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.457795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbhm6\" (UniqueName: \"kubernetes.io/projected/dcd08638-857d-40cd-a92c-b6dcef0bc329-kube-api-access-xbhm6\") pod \"barbican-operator-controller-manager-fc589b45f-r2mb8\" (UID: \"dcd08638-857d-40cd-a92c-b6dcef0bc329\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.461306 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.462308 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.466494 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-p27f6" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.492730 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9q469"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.493799 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.495482 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.501004 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.501771 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.512489 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.513983 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.514244 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cwwdt" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.514405 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x2mf7" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.533203 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.553840 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbfx\" (UniqueName: \"kubernetes.io/projected/8893a935-e9c7-4d38-ae0c-17a94445475f-kube-api-access-qrbfx\") pod \"glance-operator-controller-manager-6bfc9d4d48-zqjfh\" (UID: \"8893a935-e9c7-4d38-ae0c-17a94445475f\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.583111 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.614640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrbfx\" (UniqueName: \"kubernetes.io/projected/8893a935-e9c7-4d38-ae0c-17a94445475f-kube-api-access-qrbfx\") pod \"glance-operator-controller-manager-6bfc9d4d48-zqjfh\" (UID: \"8893a935-e9c7-4d38-ae0c-17a94445475f\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.622265 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657126 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xkvj\" (UniqueName: \"kubernetes.io/projected/5966d922-4db9-40f7-baf1-5624f1a033d6-kube-api-access-2xkvj\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcvw\" (UniqueName: \"kubernetes.io/projected/07393de3-4dbb-4de1-a7fc-49785a623de2-kube-api-access-mlcvw\") pod \"horizon-operator-controller-manager-5fb775575f-5pmpx\" (UID: \"07393de3-4dbb-4de1-a7fc-49785a623de2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657290 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvhs\" (UniqueName: \"kubernetes.io/projected/0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc-kube-api-access-7dvhs\") pod \"heat-operator-controller-manager-65dc6c8d9c-h59f2\" (UID: \"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.674151 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.675454 4979 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.685491 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-l6x25" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.714354 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9q469"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.726690 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762396 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7flt\" (UniqueName: \"kubernetes.io/projected/9c8cf87b-4069-497d-9fcc-3b7be476ed4d-kube-api-access-c7flt\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-lrqnv\" (UID: \"9c8cf87b-4069-497d-9fcc-3b7be476ed4d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xkvj\" (UniqueName: \"kubernetes.io/projected/5966d922-4db9-40f7-baf1-5624f1a033d6-kube-api-access-2xkvj\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762495 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcvw\" (UniqueName: \"kubernetes.io/projected/07393de3-4dbb-4de1-a7fc-49785a623de2-kube-api-access-mlcvw\") pod \"horizon-operator-controller-manager-5fb775575f-5pmpx\" (UID: \"07393de3-4dbb-4de1-a7fc-49785a623de2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvhs\" (UniqueName: \"kubernetes.io/projected/0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc-kube-api-access-7dvhs\") pod \"heat-operator-controller-manager-65dc6c8d9c-h59f2\" (UID: \"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:58:55 crc kubenswrapper[4979]: E0130 21:58:55.763289 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:55 crc kubenswrapper[4979]: E0130 21:58:55.763357 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. 
No retries permitted until 2026-01-30 21:58:56.263330939 +0000 UTC m=+1132.224577972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.791109 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.792085 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.809245 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.825404 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.832136 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.833423 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.836044 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.837220 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9t9d5" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.837302 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qk6v7" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.850848 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7vlqh" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865178 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6z9\" (UniqueName: \"kubernetes.io/projected/777d41f5-6e7f-4099-9f6f-aceaf0b972da-kube-api-access-8h6z9\") pod \"mariadb-operator-controller-manager-67bf948998-6bb56\" (UID: \"777d41f5-6e7f-4099-9f6f-aceaf0b972da\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2g8p\" (UniqueName: \"kubernetes.io/projected/39f45c61-20b7-4d98-98af-526018a240c1-kube-api-access-c2g8p\") pod \"keystone-operator-controller-manager-64469b487f-g6pnt\" (UID: \"39f45c61-20b7-4d98-98af-526018a240c1\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865306 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8x6q\" (UniqueName: \"kubernetes.io/projected/7f396cc2-4739-4401-9319-36881d4f449d-kube-api-access-l8x6q\") pod \"manila-operator-controller-manager-7d96d95959-5s8xm\" (UID: \"7f396cc2-4739-4401-9319-36881d4f449d\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7flt\" (UniqueName: \"kubernetes.io/projected/9c8cf87b-4069-497d-9fcc-3b7be476ed4d-kube-api-access-c7flt\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-lrqnv\" (UID: \"9c8cf87b-4069-497d-9fcc-3b7be476ed4d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.870649 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.872650 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvhs\" (UniqueName: \"kubernetes.io/projected/0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc-kube-api-access-7dvhs\") pod \"heat-operator-controller-manager-65dc6c8d9c-h59f2\" (UID: \"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.886687 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcvw\" (UniqueName: \"kubernetes.io/projected/07393de3-4dbb-4de1-a7fc-49785a623de2-kube-api-access-mlcvw\") pod \"horizon-operator-controller-manager-5fb775575f-5pmpx\" (UID: \"07393de3-4dbb-4de1-a7fc-49785a623de2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.886855 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xkvj\" (UniqueName: \"kubernetes.io/projected/5966d922-4db9-40f7-baf1-5624f1a033d6-kube-api-access-2xkvj\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.916479 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.941178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.941423 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7flt\" (UniqueName: \"kubernetes.io/projected/9c8cf87b-4069-497d-9fcc-3b7be476ed4d-kube-api-access-c7flt\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-lrqnv\" (UID: \"9c8cf87b-4069-497d-9fcc-3b7be476ed4d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.970364 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"] Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.971112 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6z9\" (UniqueName: \"kubernetes.io/projected/777d41f5-6e7f-4099-9f6f-aceaf0b972da-kube-api-access-8h6z9\") pod \"mariadb-operator-controller-manager-67bf948998-6bb56\" (UID: \"777d41f5-6e7f-4099-9f6f-aceaf0b972da\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.971167 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2g8p\" (UniqueName: \"kubernetes.io/projected/39f45c61-20b7-4d98-98af-526018a240c1-kube-api-access-c2g8p\") pod \"keystone-operator-controller-manager-64469b487f-g6pnt\" (UID: \"39f45c61-20b7-4d98-98af-526018a240c1\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.971199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8x6q\" (UniqueName: \"kubernetes.io/projected/7f396cc2-4739-4401-9319-36881d4f449d-kube-api-access-l8x6q\") pod \"manila-operator-controller-manager-7d96d95959-5s8xm\" (UID: \"7f396cc2-4739-4401-9319-36881d4f449d\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.987136 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.023345 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-v774d"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.024357 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.025193 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.036019 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ltj68" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.047120 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-v774d"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.074334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8kz\" (UniqueName: \"kubernetes.io/projected/31481495-f181-449a-887e-ed58bf88c783-kube-api-access-sq8kz\") pod \"neutron-operator-controller-manager-576995988b-v774d\" (UID: \"31481495-f181-449a-887e-ed58bf88c783\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.077171 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.088200 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.102906 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.110592 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.118986 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.121612 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dnp56" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.133776 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vvt9t" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.137502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6z9\" (UniqueName: \"kubernetes.io/projected/777d41f5-6e7f-4099-9f6f-aceaf0b972da-kube-api-access-8h6z9\") pod \"mariadb-operator-controller-manager-67bf948998-6bb56\" (UID: \"777d41f5-6e7f-4099-9f6f-aceaf0b972da\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.137564 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.138835 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8x6q\" (UniqueName: \"kubernetes.io/projected/7f396cc2-4739-4401-9319-36881d4f449d-kube-api-access-l8x6q\") pod \"manila-operator-controller-manager-7d96d95959-5s8xm\" (UID: \"7f396cc2-4739-4401-9319-36881d4f449d\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.166772 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2g8p\" (UniqueName: \"kubernetes.io/projected/39f45c61-20b7-4d98-98af-526018a240c1-kube-api-access-c2g8p\") pod \"keystone-operator-controller-manager-64469b487f-g6pnt\" (UID: \"39f45c61-20b7-4d98-98af-526018a240c1\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.185474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8kz\" (UniqueName: \"kubernetes.io/projected/31481495-f181-449a-887e-ed58bf88c783-kube-api-access-sq8kz\") pod \"neutron-operator-controller-manager-576995988b-v774d\" (UID: \"31481495-f181-449a-887e-ed58bf88c783\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.259299 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8kz\" (UniqueName: \"kubernetes.io/projected/31481495-f181-449a-887e-ed58bf88c783-kube-api-access-sq8kz\") pod \"neutron-operator-controller-manager-576995988b-v774d\" (UID: \"31481495-f181-449a-887e-ed58bf88c783\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.271582 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.282511 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.289696 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.289794 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7tx\" (UniqueName: \"kubernetes.io/projected/1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7-kube-api-access-vs7tx\") pod \"nova-operator-controller-manager-5644b66645-lz8dw\" (UID: \"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.289821 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq5km\" (UniqueName: \"kubernetes.io/projected/73527aaf-5de3-4a3e-aa4c-f2ac98e5be11-kube-api-access-vq5km\") pod \"octavia-operator-controller-manager-694c6dcf95-58s6k\" (UID: \"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.292342 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.292410 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.292389023 +0000 UTC m=+1133.253636056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.292532 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.297620 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.330125 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bflpz" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.360317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.400567 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.416000 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.443199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs7tx\" (UniqueName: \"kubernetes.io/projected/1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7-kube-api-access-vs7tx\") pod \"nova-operator-controller-manager-5644b66645-lz8dw\" (UID: \"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.443285 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq5km\" (UniqueName: \"kubernetes.io/projected/73527aaf-5de3-4a3e-aa4c-f2ac98e5be11-kube-api-access-vq5km\") pod \"octavia-operator-controller-manager-694c6dcf95-58s6k\" (UID: \"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.443550 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pqf\" (UniqueName: \"kubernetes.io/projected/82a19f5f-9a94-4b08-8795-22fce21897bf-kube-api-access-l8pqf\") pod \"ovn-operator-controller-manager-788c46999f-6f7vv\" (UID: \"82a19f5f-9a94-4b08-8795-22fce21897bf\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.445254 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.461359 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.461807 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2bfdd" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.499861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq5km\" (UniqueName: \"kubernetes.io/projected/73527aaf-5de3-4a3e-aa4c-f2ac98e5be11-kube-api-access-vq5km\") pod \"octavia-operator-controller-manager-694c6dcf95-58s6k\" (UID: \"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.511914 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs7tx\" (UniqueName: \"kubernetes.io/projected/1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7-kube-api-access-vs7tx\") pod \"nova-operator-controller-manager-5644b66645-lz8dw\" (UID: \"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.516093 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.528458 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.532241 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9nkdp" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.542939 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.544460 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.555413 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zxtwh" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.564320 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.564479 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pqf\" (UniqueName: \"kubernetes.io/projected/82a19f5f-9a94-4b08-8795-22fce21897bf-kube-api-access-l8pqf\") pod \"ovn-operator-controller-manager-788c46999f-6f7vv\" (UID: \"82a19f5f-9a94-4b08-8795-22fce21897bf\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.564647 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqbc\" (UniqueName: \"kubernetes.io/projected/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-kube-api-access-tkqbc\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.577148 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.585465 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.586596 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.592819 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dnmxb" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.601470 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.625730 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.639889 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pqf\" (UniqueName: \"kubernetes.io/projected/82a19f5f-9a94-4b08-8795-22fce21897bf-kube-api-access-l8pqf\") pod \"ovn-operator-controller-manager-788c46999f-6f7vv\" (UID: \"82a19f5f-9a94-4b08-8795-22fce21897bf\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.652263 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.667446 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqbc\" (UniqueName: \"kubernetes.io/projected/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-kube-api-access-tkqbc\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.667966 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n9js\" (UniqueName: \"kubernetes.io/projected/bf959f71-8af9-4121-888f-13207cc2e1d0-kube-api-access-9n9js\") pod \"telemetry-operator-controller-manager-69484b8d9d-nc5fg\" (UID: \"bf959f71-8af9-4121-888f-13207cc2e1d0\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.668043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frr8f\" (UniqueName: \"kubernetes.io/projected/c15b97e5-3fe4-4f42-9501-b4c7c083bdbb-kube-api-access-frr8f\") pod \"swift-operator-controller-manager-566d8d7445-78f4b\" (UID: \"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.668089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.668111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztq4\" (UniqueName: \"kubernetes.io/projected/cf2e278a-e0cb-4505-bd08-38c02155a632-kube-api-access-sztq4\") pod \"placement-operator-controller-manager-5b964cf4cd-7f98k\" (UID: \"cf2e278a-e0cb-4505-bd08-38c02155a632\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.669665 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.669749 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.169726056 +0000 UTC m=+1133.130973099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.685409 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.704274 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.708229 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqbc\" (UniqueName: \"kubernetes.io/projected/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-kube-api-access-tkqbc\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.711355 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.728559 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.749210 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.750517 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.753744 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8m6mf" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773096 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n9js\" (UniqueName: \"kubernetes.io/projected/bf959f71-8af9-4121-888f-13207cc2e1d0-kube-api-access-9n9js\") pod \"telemetry-operator-controller-manager-69484b8d9d-nc5fg\" (UID: \"bf959f71-8af9-4121-888f-13207cc2e1d0\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773207 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frr8f\" (UniqueName: \"kubernetes.io/projected/c15b97e5-3fe4-4f42-9501-b4c7c083bdbb-kube-api-access-frr8f\") pod \"swift-operator-controller-manager-566d8d7445-78f4b\" (UID: \"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773272 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48t77\" (UniqueName: \"kubernetes.io/projected/baa9dff2-93f9-4590-a86d-cd891b4273f2-kube-api-access-48t77\") pod \"test-operator-controller-manager-56f8bfcd9f-57br8\" (UID: \"baa9dff2-93f9-4590-a86d-cd891b4273f2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773312 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztq4\" (UniqueName: \"kubernetes.io/projected/cf2e278a-e0cb-4505-bd08-38c02155a632-kube-api-access-sztq4\") pod \"placement-operator-controller-manager-5b964cf4cd-7f98k\" (UID: 
\"cf2e278a-e0cb-4505-bd08-38c02155a632\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.787147 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.815082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n9js\" (UniqueName: \"kubernetes.io/projected/bf959f71-8af9-4121-888f-13207cc2e1d0-kube-api-access-9n9js\") pod \"telemetry-operator-controller-manager-69484b8d9d-nc5fg\" (UID: \"bf959f71-8af9-4121-888f-13207cc2e1d0\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.821120 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.822365 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.829193 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frr8f\" (UniqueName: \"kubernetes.io/projected/c15b97e5-3fe4-4f42-9501-b4c7c083bdbb-kube-api-access-frr8f\") pod \"swift-operator-controller-manager-566d8d7445-78f4b\" (UID: \"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.832175 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mf7d7" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.833715 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztq4\" (UniqueName: \"kubernetes.io/projected/cf2e278a-e0cb-4505-bd08-38c02155a632-kube-api-access-sztq4\") pod \"placement-operator-controller-manager-5b964cf4cd-7f98k\" (UID: \"cf2e278a-e0cb-4505-bd08-38c02155a632\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.851114 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.868525 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.870108 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873535 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873905 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txjl\" (UniqueName: \"kubernetes.io/projected/2487dbd3-ca49-4b26-99e3-2c858b549944-kube-api-access-2txjl\") pod \"watcher-operator-controller-manager-586b95b788-dpkrg\" (UID: \"2487dbd3-ca49-4b26-99e3-2c858b549944\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48t77\" (UniqueName: \"kubernetes.io/projected/baa9dff2-93f9-4590-a86d-cd891b4273f2-kube-api-access-48t77\") pod \"test-operator-controller-manager-56f8bfcd9f-57br8\" (UID: \"baa9dff2-93f9-4590-a86d-cd891b4273f2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873979 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.874050 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwc8\" (UniqueName: \"kubernetes.io/projected/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-kube-api-access-5xwc8\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.874129 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.888051 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.888286 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.888434 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k8hfr" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.916798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48t77\" (UniqueName: \"kubernetes.io/projected/baa9dff2-93f9-4590-a86d-cd891b4273f2-kube-api-access-48t77\") pod \"test-operator-controller-manager-56f8bfcd9f-57br8\" (UID: 
\"baa9dff2-93f9-4590-a86d-cd891b4273f2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.922498 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.923052 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.923766 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.936731 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8w5lt" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.954187 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.971668 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"] Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986299 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txjl\" (UniqueName: \"kubernetes.io/projected/2487dbd3-ca49-4b26-99e3-2c858b549944-kube-api-access-2txjl\") pod \"watcher-operator-controller-manager-586b95b788-dpkrg\" (UID: \"2487dbd3-ca49-4b26-99e3-2c858b549944\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986411 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwc8\" (UniqueName: \"kubernetes.io/projected/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-kube-api-access-5xwc8\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.986725 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.986801 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.486775348 +0000 UTC m=+1133.448022381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.990187 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.998277 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.496298516 +0000 UTC m=+1133.457545549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.030987 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txjl\" (UniqueName: \"kubernetes.io/projected/2487dbd3-ca49-4b26-99e3-2c858b549944-kube-api-access-2txjl\") pod \"watcher-operator-controller-manager-586b95b788-dpkrg\" (UID: \"2487dbd3-ca49-4b26-99e3-2c858b549944\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.031731 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.031871 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwc8\" (UniqueName: \"kubernetes.io/projected/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-kube-api-access-5xwc8\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.046472 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.088563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czf24\" (UniqueName: \"kubernetes.io/projected/788f4d92-590f-44b1-8b93-a15b9f88b052-kube-api-access-czf24\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r4rcx\" (UID: \"788f4d92-590f-44b1-8b93-a15b9f88b052\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.090514 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.125550 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.177360 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.191253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.191378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czf24\" (UniqueName: \"kubernetes.io/projected/788f4d92-590f-44b1-8b93-a15b9f88b052-kube-api-access-czf24\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r4rcx\" (UID: \"788f4d92-590f-44b1-8b93-a15b9f88b052\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.191915 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.191967 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:58:58.191948536 +0000 UTC m=+1134.153195569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.233436 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czf24\" (UniqueName: \"kubernetes.io/projected/788f4d92-590f-44b1-8b93-a15b9f88b052-kube-api-access-czf24\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r4rcx\" (UID: \"788f4d92-590f-44b1-8b93-a15b9f88b052\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.246937 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.272742 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" event={"ID":"dcd08638-857d-40cd-a92c-b6dcef0bc329","Type":"ContainerStarted","Data":"66ec391d58e9960ec214debe0d680ccf5cb75d3e9c5a3e678db1222022950789"} Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.293741 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.294308 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.294361 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:59.294343344 +0000 UTC m=+1135.255590377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.298996 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" event={"ID":"11771b88-abd2-436e-a95c-5113a5bae88b","Type":"ContainerStarted","Data":"cc46f58c8018b2ead10d44569d9ab7afbb58be368ff92e05e6b217db3c7973f5"} Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.339455 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"] Jan 30 21:58:57 crc kubenswrapper[4979]: W0130 21:58:57.359947 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9134e6d2_b638_49be_9612_be12250e0a6d.slice/crio-6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e WatchSource:0}: Error finding container 6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e: Status 404 returned error can't find the container with id 6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.496348 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.496631 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.496780 4979 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:58.496753007 +0000 UTC m=+1134.458000040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.601168 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.601571 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.601683 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:58.601653883 +0000 UTC m=+1134.562900916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.720145 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.728477 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.741423 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.754155 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.777302 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"] Jan 30 21:58:57 crc kubenswrapper[4979]: W0130 21:58:57.806271 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe4c32c_a00c_41e9_a15d_d1ff4cedf9f7.slice/crio-898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d WatchSource:0}: Error finding container 898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d: Status 404 returned error can't find the container with id 898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.842427 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.876733 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.909230 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.134409 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.152219 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.158790 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.175327 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"] Jan 30 21:58:58 crc kubenswrapper[4979]: W0130 21:58:58.178829 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2487dbd3_ca49_4b26_99e3_2c858b549944.slice/crio-2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b WatchSource:0}: Error finding container 2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b: Status 404 returned error can't find the container with id 2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.231627 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.231831 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.231888 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:00.231865562 +0000 UTC m=+1136.193112595 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.235538 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"] Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.245271 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sq8kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-v774d_openstack-operators(31481495-f181-449a-887e-ed58bf88c783): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: W0130 21:58:58.259195 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc15b97e5_3fe4_4f42_9501_b4c7c083bdbb.slice/crio-e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5 WatchSource:0}: Error finding container e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5: Status 404 returned error can't find the container with id 
e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5 Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.259631 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podUID="31481495-f181-449a-887e-ed58bf88c783" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.268869 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-48t77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-57br8_openstack-operators(baa9dff2-93f9-4590-a86d-cd891b4273f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.269992 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podUID="baa9dff2-93f9-4590-a86d-cd891b4273f2" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.276093 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9n9js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-69484b8d9d-nc5fg_openstack-operators(bf959f71-8af9-4121-888f-13207cc2e1d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.277409 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podUID="bf959f71-8af9-4121-888f-13207cc2e1d0" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.279340 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frr8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-566d8d7445-78f4b_openstack-operators(c15b97e5-3fe4-4f42-9501-b4c7c083bdbb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.279451 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czf24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-r4rcx_openstack-operators(788f4d92-590f-44b1-8b93-a15b9f88b052): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.281324 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podUID="788f4d92-590f-44b1-8b93-a15b9f88b052" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.281383 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podUID="c15b97e5-3fe4-4f42-9501-b4c7c083bdbb" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.291133 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-v774d"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.296435 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.302092 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.306732 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.333376 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" event={"ID":"2487dbd3-ca49-4b26-99e3-2c858b549944","Type":"ContainerStarted","Data":"2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.335254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" event={"ID":"7f396cc2-4739-4401-9319-36881d4f449d","Type":"ContainerStarted","Data":"7be67aa680ba79c1b74dd25821f43e826dffdbb846c327129109e20b7f91af66"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.343569 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" 
event={"ID":"9134e6d2-b638-49be-9612-be12250e0a6d","Type":"ContainerStarted","Data":"6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.360793 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" event={"ID":"777d41f5-6e7f-4099-9f6f-aceaf0b972da","Type":"ContainerStarted","Data":"8bd7ed684f108dcf5a99e700affd35c3c80dade7051363d2b9848288b2063b1c"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.376572 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" event={"ID":"07393de3-4dbb-4de1-a7fc-49785a623de2","Type":"ContainerStarted","Data":"e6e0db1dcd1f9230375556cfa1847411cf2025f53695bc7e8ee5b66539a20a92"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.382920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" event={"ID":"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11","Type":"ContainerStarted","Data":"0389c19893a55c30c8198d9fcc9a37e00975ba89b9f1e565e2d6969926bdc40f"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.384467 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" event={"ID":"31481495-f181-449a-887e-ed58bf88c783","Type":"ContainerStarted","Data":"f3920d87c0013621cd7f4beb506abbc73eefda33aad2b3719f1075e00ee8cbca"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.385646 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" event={"ID":"39f45c61-20b7-4d98-98af-526018a240c1","Type":"ContainerStarted","Data":"36be41324561d5f7c7cc3a1f1f888e7f40e694d4cee80ac2e087a5e23901cc01"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.386386 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podUID="31481495-f181-449a-887e-ed58bf88c783" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.395417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" event={"ID":"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc","Type":"ContainerStarted","Data":"e9ee1225fae6ed01d9e7ce06f65effe310b78668cfb2f0f603c6514f7d2483db"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.398672 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" event={"ID":"cf2e278a-e0cb-4505-bd08-38c02155a632","Type":"ContainerStarted","Data":"9243f3624479c7f135541459a276bbd2f0984ed35dbb5738fcfa1d1ad390c85d"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.411433 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" event={"ID":"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb","Type":"ContainerStarted","Data":"e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.413676 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" event={"ID":"8893a935-e9c7-4d38-ae0c-17a94445475f","Type":"ContainerStarted","Data":"c75785069466c39270087e5589b4eab54cf4d089bf564b4fae7e9c9fcd62a0b2"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.413717 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podUID="c15b97e5-3fe4-4f42-9501-b4c7c083bdbb" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.417170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" event={"ID":"788f4d92-590f-44b1-8b93-a15b9f88b052","Type":"ContainerStarted","Data":"0a09e2a4ce740e64ad18e4734d7481e7ce3d91c3faf8386f5888144565151049"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.419386 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podUID="788f4d92-590f-44b1-8b93-a15b9f88b052" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.420993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" event={"ID":"bf959f71-8af9-4121-888f-13207cc2e1d0","Type":"ContainerStarted","Data":"067da97437fc8c4db88207b02e46078368819cde1916f92230da12f482ed30c0"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.422561 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podUID="bf959f71-8af9-4121-888f-13207cc2e1d0" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.423501 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" event={"ID":"9c8cf87b-4069-497d-9fcc-3b7be476ed4d","Type":"ContainerStarted","Data":"7205429c3d378bbd8b8fd00e2e77b72ee93087fe8a8cc1e66e97c5c681686793"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.425076 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" event={"ID":"82a19f5f-9a94-4b08-8795-22fce21897bf","Type":"ContainerStarted","Data":"84821abbda03f7cf84c7cc1354b2da6b877de962ea39b4e333f2061ce74f303a"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.427422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" event={"ID":"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7","Type":"ContainerStarted","Data":"898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.445699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" event={"ID":"baa9dff2-93f9-4590-a86d-cd891b4273f2","Type":"ContainerStarted","Data":"80ab1bd9d1ba102d454a9adac545f2464ffcc2dfdbea82c7a3182d562cceb443"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.447444 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podUID="baa9dff2-93f9-4590-a86d-cd891b4273f2" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.545308 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.545589 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.545662 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:00.545640267 +0000 UTC m=+1136.506887300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.646652 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.647369 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.647431 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:00.647410808 +0000 UTC m=+1136.608657841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:58:59 crc kubenswrapper[4979]: I0130 21:58:59.389719 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.389997 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.390160 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:03.39012302 +0000 UTC m=+1139.351370213 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.479075 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podUID="baa9dff2-93f9-4590-a86d-cd891b4273f2" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.479279 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podUID="31481495-f181-449a-887e-ed58bf88c783" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.479341 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podUID="788f4d92-590f-44b1-8b93-a15b9f88b052" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.480090 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podUID="bf959f71-8af9-4121-888f-13207cc2e1d0" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.480165 4979 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podUID="c15b97e5-3fe4-4f42-9501-b4c7c083bdbb" Jan 30 21:59:00 crc kubenswrapper[4979]: I0130 21:59:00.315376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.316058 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.316178 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:04.316158117 +0000 UTC m=+1140.277405140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: I0130 21:59:00.620249 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.620511 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.620606 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:04.620585469 +0000 UTC m=+1140.581832502 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: I0130 21:59:00.722618 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.722800 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.722928 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:04.722902895 +0000 UTC m=+1140.684149928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.039495 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.040044 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.040115 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.041043 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.041114 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff" gracePeriod=600 Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.506643 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" 
containerID="d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff" exitCode=0 Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.506696 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"} Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.506737 4979 scope.go:117] "RemoveContainer" containerID="ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28" Jan 30 21:59:03 crc kubenswrapper[4979]: I0130 21:59:03.466558 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:03 crc kubenswrapper[4979]: E0130 21:59:03.466826 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:59:03 crc kubenswrapper[4979]: E0130 21:59:03.466932 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:11.466911335 +0000 UTC m=+1147.428158368 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.381206 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.381465 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.381773 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.381744721 +0000 UTC m=+1148.342991754 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.687172 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.687343 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.687435 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.687414505 +0000 UTC m=+1148.648661538 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.788672 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.788874 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.788956 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.78893087 +0000 UTC m=+1148.750177903 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:59:10 crc kubenswrapper[4979]: E0130 21:59:10.990537 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c" Jan 30 21:59:10 crc kubenswrapper[4979]: E0130 21:59:10.991618 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8x6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7d96d95959-5s8xm_openstack-operators(7f396cc2-4739-4401-9319-36881d4f449d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:59:10 crc kubenswrapper[4979]: E0130 21:59:10.992816 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" 
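[editor's note] The "Unhandled Error" container dump above prints resource quantities in Kubernetes' internal (value, scale) form: a quantity is value*10^scale, so {{500 -3}} DecimalSI is 500*10^-3 CPU = "500m", and {{536870912 0}} BinarySI is 536870912 bytes = 512Mi. A small self-contained Go check of that decoding (the helper name is the editor's, not a Kubernetes API):

    package main

    import "fmt"

    // decimal evaluates value * 10^scale, the pair printed in the dump.
    func decimal(value int64, scale int) float64 {
        f := float64(value)
        for ; scale < 0; scale++ {
            f /= 10
        }
        for ; scale > 0; scale-- {
            f *= 10
        }
        return f
    }

    func main() {
        fmt.Printf("cpu limit: %g cores (500m)\n", decimal(500, -3))      // 0.5
        fmt.Printf("memory limit: %dMi\n", int64(decimal(536870912, 0))>>20)  // 512
        fmt.Printf("cpu request: %g cores (10m)\n", decimal(10, -3))      // 0.01
        fmt.Printf("memory request: %dMi\n", int64(decimal(268435456, 0))>>20) // 256
    }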
podUID="7f396cc2-4739-4401-9319-36881d4f449d" Jan 30 21:59:11 crc kubenswrapper[4979]: I0130 21:59:11.506336 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:11 crc kubenswrapper[4979]: E0130 21:59:11.506624 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:59:11 crc kubenswrapper[4979]: E0130 21:59:11.506780 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:27.506742675 +0000 UTC m=+1163.467989758 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:59:11 crc kubenswrapper[4979]: E0130 21:59:11.619599 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" podUID="7f396cc2-4739-4401-9319-36881d4f449d" Jan 30 21:59:12 crc kubenswrapper[4979]: I0130 21:59:12.420130 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.420324 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.420399 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:28.420378097 +0000 UTC m=+1164.381625130 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:12 crc kubenswrapper[4979]: I0130 21:59:12.726136 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.726500 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.726704 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:28.726595846 +0000 UTC m=+1164.687842919 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:59:12 crc kubenswrapper[4979]: I0130 21:59:12.828822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.829156 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.829273 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:28.829239741 +0000 UTC m=+1164.790486814 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:59:20 crc kubenswrapper[4979]: E0130 21:59:20.083827 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 30 21:59:20 crc kubenswrapper[4979]: E0130 21:59:20.084942 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8h6z9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-6bb56_openstack-operators(777d41f5-6e7f-4099-9f6f-aceaf0b972da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:59:20 crc kubenswrapper[4979]: E0130 21:59:20.086123 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" 
podUID="777d41f5-6e7f-4099-9f6f-aceaf0b972da" Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.690960 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c"} Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.695874 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" event={"ID":"9c8cf87b-4069-497d-9fcc-3b7be476ed4d","Type":"ContainerStarted","Data":"a94eccfc7a3c43517234031c1637215147533ee44bb3c9a4aaf2284329686b25"} Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.695932 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:59:20 crc kubenswrapper[4979]: E0130 21:59:20.698208 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" podUID="777d41f5-6e7f-4099-9f6f-aceaf0b972da" Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.743353 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" podStartSLOduration=3.288672078 podStartE2EDuration="25.74332449s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.806166482 +0000 UTC m=+1133.767413515" lastFinishedPulling="2026-01-30 21:59:20.260818894 +0000 UTC m=+1156.222065927" observedRunningTime="2026-01-30 21:59:20.742088227 +0000 UTC m=+1156.703335260" watchObservedRunningTime="2026-01-30 21:59:20.74332449 +0000 UTC m=+1156.704571523" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.730663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" event={"ID":"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7","Type":"ContainerStarted","Data":"fafda2ccb236b70bec9150792062e9a0576972dc1d0ea3b11e870514b11ebbdc"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.732521 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.746775 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" event={"ID":"11771b88-abd2-436e-a95c-5113a5bae88b","Type":"ContainerStarted","Data":"148f670e2891d921d4b7bdc19541dc4ac3c7efae836d6d1127acbfe8c825f3bc"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.746947 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.782302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" event={"ID":"dcd08638-857d-40cd-a92c-b6dcef0bc329","Type":"ContainerStarted","Data":"734b7f7622002596a65f09e60c75ba4aa63a9e4eb02eb0c3a268d5bb0745989d"} Jan 30 21:59:21 crc 
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.803071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" event={"ID":"07393de3-4dbb-4de1-a7fc-49785a623de2","Type":"ContainerStarted","Data":"0a7f7cd6ea14a9f0accb348f49a38e95327b951156eada009517b3d8b5cca9a3"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.803414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.819427 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" podStartSLOduration=4.349214403 podStartE2EDuration="26.819404965s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.811462386 +0000 UTC m=+1133.772709419" lastFinishedPulling="2026-01-30 21:59:20.281652948 +0000 UTC m=+1156.242899981" observedRunningTime="2026-01-30 21:59:21.814714498 +0000 UTC m=+1157.775961531" watchObservedRunningTime="2026-01-30 21:59:21.819404965 +0000 UTC m=+1157.780651988"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.823339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" event={"ID":"8893a935-e9c7-4d38-ae0c-17a94445475f","Type":"ContainerStarted","Data":"ebd433217afa9acce4a52eeae57a99ec5deed2dc3f89bdbcc4b519c59df39d0b"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.824165 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.854779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" event={"ID":"39f45c61-20b7-4d98-98af-526018a240c1","Type":"ContainerStarted","Data":"81e67d28011955ad1ef4c8797df5606e2475e65b7ba2d9516c364c2e26aaab3a"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.855324 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.878784 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" podStartSLOduration=3.705911398 podStartE2EDuration="26.878758079s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.087950893 +0000 UTC m=+1133.049197926" lastFinishedPulling="2026-01-30 21:59:20.260797524 +0000 UTC m=+1156.222044607" observedRunningTime="2026-01-30 21:59:21.875494061 +0000 UTC m=+1157.836741094" watchObservedRunningTime="2026-01-30 21:59:21.878758079 +0000 UTC m=+1157.840005112"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.888200 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" event={"ID":"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc","Type":"ContainerStarted","Data":"5d822d11486e4199454ccb1a78bf6661761feec388c6b26beed488725d4f8fdb"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.888382 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.918739 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" event={"ID":"2487dbd3-ca49-4b26-99e3-2c858b549944","Type":"ContainerStarted","Data":"bbaf76139ef03473391138318100a70cef117e9012be4b7c59bb241f35603776"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.918835 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.921054 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" podStartSLOduration=3.698337055 podStartE2EDuration="26.921005832s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.038791234 +0000 UTC m=+1133.000038257" lastFinishedPulling="2026-01-30 21:59:20.261459991 +0000 UTC m=+1156.222707034" observedRunningTime="2026-01-30 21:59:21.913253872 +0000 UTC m=+1157.874500895" watchObservedRunningTime="2026-01-30 21:59:21.921005832 +0000 UTC m=+1157.882252865"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.932614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" event={"ID":"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11","Type":"ContainerStarted","Data":"a267f4c4f8adc8f092ba404528b45428db7ce12bd81c5ca3b75c5f46c15eb392"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.933601 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.940549 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" event={"ID":"9134e6d2-b638-49be-9612-be12250e0a6d","Type":"ContainerStarted","Data":"ae6aa0e568e734f58e8a2537e452a9844185645beeb04db2212a04d41daacf8f"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.941272 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.948372 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" event={"ID":"82a19f5f-9a94-4b08-8795-22fce21897bf","Type":"ContainerStarted","Data":"fd8c6207905112db605632c95166aa9c01f9a387c36d06a59cb86ff90aa82113"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.949238 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.983520 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" event={"ID":"cf2e278a-e0cb-4505-bd08-38c02155a632","Type":"ContainerStarted","Data":"ed4bcbc6a95d839bdc6a016f39a72c92c62cd8152ee8a9c1d5b73d2469dc0d51"}
Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.983713 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.004804 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" podStartSLOduration=4.4694997149999995 podStartE2EDuration="27.004776746s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.747887187 +0000 UTC m=+1133.709134220" lastFinishedPulling="2026-01-30 21:59:20.283164218 +0000 UTC m=+1156.244411251" observedRunningTime="2026-01-30 21:59:22.002391173 +0000 UTC m=+1157.963638206" watchObservedRunningTime="2026-01-30 21:59:22.004776746 +0000 UTC m=+1157.966023779"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.005227 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" podStartSLOduration=4.614760744 podStartE2EDuration="27.005214829s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.892711683 +0000 UTC m=+1133.853958716" lastFinishedPulling="2026-01-30 21:59:20.283165768 +0000 UTC m=+1156.244412801" observedRunningTime="2026-01-30 21:59:21.973392868 +0000 UTC m=+1157.934639901" watchObservedRunningTime="2026-01-30 21:59:22.005214829 +0000 UTC m=+1157.966461862"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.047087 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" podStartSLOduration=4.652301209 podStartE2EDuration="27.047062901s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.866173415 +0000 UTC m=+1133.827420448" lastFinishedPulling="2026-01-30 21:59:20.260935097 +0000 UTC m=+1156.222182140" observedRunningTime="2026-01-30 21:59:22.043859843 +0000 UTC m=+1158.005106896" watchObservedRunningTime="2026-01-30 21:59:22.047062901 +0000 UTC m=+1158.008309934"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.082758 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" podStartSLOduration=4.189387632 podStartE2EDuration="27.082734705s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.37186351 +0000 UTC m=+1133.333110543" lastFinishedPulling="2026-01-30 21:59:20.265210583 +0000 UTC m=+1156.226457616" observedRunningTime="2026-01-30 21:59:22.082217551 +0000 UTC m=+1158.043464594" watchObservedRunningTime="2026-01-30 21:59:22.082734705 +0000 UTC m=+1158.043981738"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.115552 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" podStartSLOduration=5.070241238 podStartE2EDuration="27.115527751s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.235873151 +0000 UTC m=+1134.197120184" lastFinishedPulling="2026-01-30 21:59:20.281159664 +0000 UTC m=+1156.242406697" observedRunningTime="2026-01-30 21:59:22.112787928 +0000 UTC m=+1158.074034961" watchObservedRunningTime="2026-01-30 21:59:22.115527751 +0000 UTC m=+1158.076774784"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.155869 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" podStartSLOduration=4.644234039 podStartE2EDuration="27.155843381s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.771939997 +0000 UTC m=+1133.733187030" lastFinishedPulling="2026-01-30 21:59:20.283549299 +0000 UTC m=+1156.244796372" observedRunningTime="2026-01-30 21:59:22.141791041 +0000 UTC m=+1158.103038074" watchObservedRunningTime="2026-01-30 21:59:22.155843381 +0000 UTC m=+1158.117090414"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.165622 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" podStartSLOduration=4.077449303 podStartE2EDuration="26.165603195s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.193159376 +0000 UTC m=+1134.154406409" lastFinishedPulling="2026-01-30 21:59:20.281313248 +0000 UTC m=+1156.242560301" observedRunningTime="2026-01-30 21:59:22.165247786 +0000 UTC m=+1158.126494809" watchObservedRunningTime="2026-01-30 21:59:22.165603195 +0000 UTC m=+1158.126850218"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.205595 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" podStartSLOduration=5.089609302 podStartE2EDuration="27.205568406s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.162921559 +0000 UTC m=+1134.124168582" lastFinishedPulling="2026-01-30 21:59:20.278880653 +0000 UTC m=+1156.240127686" observedRunningTime="2026-01-30 21:59:22.198668489 +0000 UTC m=+1158.159915522" watchObservedRunningTime="2026-01-30 21:59:22.205568406 +0000 UTC m=+1158.166815439"
Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.231079 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" podStartSLOduration=5.12760936 podStartE2EDuration="27.231048405s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.191193983 +0000 UTC m=+1134.152441016" lastFinishedPulling="2026-01-30 21:59:20.294632988 +0000 UTC m=+1156.255880061" observedRunningTime="2026-01-30 21:59:22.224338123 +0000 UTC m=+1158.185585176" watchObservedRunningTime="2026-01-30 21:59:22.231048405 +0000 UTC m=+1158.192295438"
Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.506153 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"
Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.515471 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"
Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.545351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.841381 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.919712 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.991692 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.029899 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"
Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.410062 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.581201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"
Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.692848 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.716381 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.926839 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.181253 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"
Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.563527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.571824 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.756350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x2mf7"
Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.764636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.042926 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" event={"ID":"baa9dff2-93f9-4590-a86d-cd891b4273f2","Type":"ContainerStarted","Data":"81ae69f1f4042b3f0b32cf931dd77c71b9c8eaed6e942a169a08e55203d5c127"}
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.043639 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.044571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" event={"ID":"7f396cc2-4739-4401-9319-36881d4f449d","Type":"ContainerStarted","Data":"5e5fa102062b3213173b4b7028fc98f5078d10883255397a9069aa503c17f1ec"}
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.044878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.047659 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" event={"ID":"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb","Type":"ContainerStarted","Data":"075dd3f2e95c9837d79739c8021bfa7451815803fdc054bdcccf94a01e4c6eaa"}
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.047913 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.049158 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" event={"ID":"788f4d92-590f-44b1-8b93-a15b9f88b052","Type":"ContainerStarted","Data":"90753a5a0ae7bc9967f363c232d527fcd42792a4f40de2164c36c086150ba040"}
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.050825 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" event={"ID":"bf959f71-8af9-4121-888f-13207cc2e1d0","Type":"ContainerStarted","Data":"dd9829cf601b74c8c1136dabe4d47b911aaa6177c6e53144e68c6e727773c7aa"}
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.050997 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.054383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" event={"ID":"31481495-f181-449a-887e-ed58bf88c783","Type":"ContainerStarted","Data":"d4a637bece9ddf4ea4b7ae2fb88dd2c6108ec36660b8882056b88ad6c796eeab"}
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.054632 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.064973 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podStartSLOduration=3.016210617 podStartE2EDuration="32.064948597s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.26872418 +0000 UTC m=+1134.229971213" lastFinishedPulling="2026-01-30 21:59:27.31746216 +0000 UTC m=+1163.278709193" observedRunningTime="2026-01-30 21:59:28.062645645 +0000 UTC m=+1164.023892678" watchObservedRunningTime="2026-01-30 21:59:28.064948597 +0000 UTC m=+1164.026195620"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.113953 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podStartSLOduration=4.046386115 podStartE2EDuration="33.113931725s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.279225473 +0000 UTC m=+1134.240472506" lastFinishedPulling="2026-01-30 21:59:27.346771083 +0000 UTC m=+1163.308018116" observedRunningTime="2026-01-30 21:59:28.11081158 +0000 UTC m=+1164.072058623" watchObservedRunningTime="2026-01-30 21:59:28.113931725 +0000 UTC m=+1164.075178758"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.161447 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" podStartSLOduration=3.604626945 podStartE2EDuration="33.161415552s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.800096709 +0000 UTC m=+1133.761343742" lastFinishedPulling="2026-01-30 21:59:27.356885316 +0000 UTC m=+1163.318132349" observedRunningTime="2026-01-30 21:59:28.14427913 +0000 UTC m=+1164.105526173" watchObservedRunningTime="2026-01-30 21:59:28.161415552 +0000 UTC m=+1164.122662585"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.163684 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podStartSLOduration=4.092912536 podStartE2EDuration="33.163659892s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.275865243 +0000 UTC m=+1134.237112276" lastFinishedPulling="2026-01-30 21:59:27.346612589 +0000 UTC m=+1163.307859632" observedRunningTime="2026-01-30 21:59:28.161984877 +0000 UTC m=+1164.123231930" watchObservedRunningTime="2026-01-30 21:59:28.163659892 +0000 UTC m=+1164.124906925"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.219167 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podStartSLOduration=3.098054377 podStartE2EDuration="32.219145285s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.279394377 +0000 UTC m=+1134.240641410" lastFinishedPulling="2026-01-30 21:59:27.400485275 +0000 UTC m=+1163.361732318" observedRunningTime="2026-01-30 21:59:28.201297284 +0000 UTC m=+1164.162544327" watchObservedRunningTime="2026-01-30 21:59:28.219145285 +0000 UTC m=+1164.180392318"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.221502 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podStartSLOduration=4.111169011 podStartE2EDuration="33.221493277s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.245001018 +0000 UTC m=+1134.206248061" lastFinishedPulling="2026-01-30 21:59:27.355325294 +0000 UTC m=+1163.316572327" observedRunningTime="2026-01-30 21:59:28.218109537 +0000 UTC m=+1164.179356570" watchObservedRunningTime="2026-01-30 21:59:28.221493277 +0000 UTC m=+1164.182740310"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.276312 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9q469"]
Jan 30 21:59:28 crc kubenswrapper[4979]: W0130 21:59:28.276524 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5966d922_4db9_40f7_baf1_5624f1a033d6.slice/crio-98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94 WatchSource:0}: Error finding container 98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94: Status 404 returned error can't find the container with id 98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.479508 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.487769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.624749 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2bfdd"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.633142 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.785315 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.790663 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.888615 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.893947 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.975864 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"]
Jan 30 21:59:28 crc kubenswrapper[4979]: W0130 21:59:28.998191 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9710f6a_7b47_4f62_bc11_9d5727fdb01f.slice/crio-45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093 WatchSource:0}: Error finding container 45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093: Status 404 returned error can't find the container with id 45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093
Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.003143 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k8hfr"
Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.011019 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
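[editor's note] The "Failed to process watch event ... Status 404" warnings above are a benign startup race: the cgroup for a new container appears (and is seen by the watch) before CRI-O has registered the container, so the id lookup 404s until the next pass; the matching ContainerStarted events follow shortly after. A sketch of pulling the CRI-O container id out of such a cgroup event name; the parsing rule (trailing "crio-<id>" component, optional ".scope" suffix) is an assumption for illustration:

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    // crioID extracts the container id from a cgroup path like the
    // watch-event Names in the log above.
    func crioID(cgroupPath string) (string, bool) {
        base := strings.TrimSuffix(path.Base(cgroupPath), ".scope")
        return strings.CutPrefix(base, "crio-")
    }

    func main() {
        p := "/kubepods.slice/kubepods-burstable.slice/" +
            "kubepods-burstable-podcea237e7_6ca9_4dcd_b5d6_d471898e2c09.slice/" +
            "crio-50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035"
        if id, ok := crioID(p); ok {
            fmt.Println(id) // the id the kubelet could not yet find at watch time
        }
    }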
Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.063790 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" event={"ID":"c9710f6a-7b47-4f62-bc11-9d5727fdb01f","Type":"ContainerStarted","Data":"45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093"}
Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.108566 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" event={"ID":"5966d922-4db9-40f7-baf1-5624f1a033d6","Type":"ContainerStarted","Data":"98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94"}
Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.453113 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"]
Jan 30 21:59:29 crc kubenswrapper[4979]: W0130 21:59:29.455799 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcea237e7_6ca9_4dcd_b5d6_d471898e2c09.slice/crio-50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035 WatchSource:0}: Error finding container 50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035: Status 404 returned error can't find the container with id 50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035
Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.090609 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" event={"ID":"cea237e7-6ca9-4dcd-b5d6-d471898e2c09","Type":"ContainerStarted","Data":"981ed0823f263ddcf3b979b191a75600f478958046fec895b8f46f170294d758"}
Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.092600 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.092705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" event={"ID":"cea237e7-6ca9-4dcd-b5d6-d471898e2c09","Type":"ContainerStarted","Data":"50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035"}
Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.123399 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" podStartSLOduration=34.12336823 podStartE2EDuration="34.12336823s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:59:30.11517011 +0000 UTC m=+1166.076417153" watchObservedRunningTime="2026-01-30 21:59:30.12336823 +0000 UTC m=+1166.084615263"
Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.108346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" event={"ID":"c9710f6a-7b47-4f62-bc11-9d5727fdb01f","Type":"ContainerStarted","Data":"cf549d8ae18c8281fb0564c07c6307feea503b665416aaf74852a0c2b9347940"}
Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.110939 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.114147 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" event={"ID":"5966d922-4db9-40f7-baf1-5624f1a033d6","Type":"ContainerStarted","Data":"00752eaee374f18148c6894a09c1ab2a3e538f8d15c67e73b793ee5386be6282"}
Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.114194 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.142455 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" podStartSLOduration=34.71633242 podStartE2EDuration="37.142424695s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:59:29.001346016 +0000 UTC m=+1164.962593049" lastFinishedPulling="2026-01-30 21:59:31.427438291 +0000 UTC m=+1167.388685324" observedRunningTime="2026-01-30 21:59:32.139464746 +0000 UTC m=+1168.100711779" watchObservedRunningTime="2026-01-30 21:59:32.142424695 +0000 UTC m=+1168.103671718"
Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.166742 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" podStartSLOduration=34.020483201 podStartE2EDuration="37.166711028s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:59:28.282061167 +0000 UTC m=+1164.243308200" lastFinishedPulling="2026-01-30 21:59:31.428288994 +0000 UTC m=+1167.389536027" observedRunningTime="2026-01-30 21:59:32.160897722 +0000 UTC m=+1168.122144755" watchObservedRunningTime="2026-01-30 21:59:32.166711028 +0000 UTC m=+1168.127958061"
Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.146212 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" event={"ID":"777d41f5-6e7f-4099-9f6f-aceaf0b972da","Type":"ContainerStarted","Data":"6024b8df2cdd89d7404fff4df4498f7f6f89c1c0042f7aa5b8515da1ec3974ab"}
Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.147405 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.174529 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" podStartSLOduration=3.530367162 podStartE2EDuration="41.174497924s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.857559502 +0000 UTC m=+1133.818806535" lastFinishedPulling="2026-01-30 21:59:35.501690264 +0000 UTC m=+1171.462937297" observedRunningTime="2026-01-30 21:59:36.17025474 +0000 UTC m=+1172.131501773" watchObservedRunningTime="2026-01-30 21:59:36.174497924 +0000 UTC m=+1172.135744957"
Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.296716 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.656473 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.035483 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"
Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.093652 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"
Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.130516 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"
Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.773402 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:38 crc kubenswrapper[4979]: I0130 21:59:38.640589 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:39 crc kubenswrapper[4979]: I0130 21:59:39.021732 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:46 crc kubenswrapper[4979]: I0130 21:59:46.364012 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.508197 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"]
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.510759 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.515749 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.516083 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-dqnfn"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.516303 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.516468 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.536299 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"]
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.548166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.548247 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v"
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.600241 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"]
Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.601730 4979 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.604456 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.607796 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650007 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650203 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650233 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650263 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.651468 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.671669 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.750990 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 
21:59:59.751082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.751158 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.752773 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.753175 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.782266 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.837094 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.921840 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.185149 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.186531 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.197868 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.197893 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.204709 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.361201 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.361382 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.361593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.463324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.463533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.463628 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.465877 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod 
\"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.470219 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.488902 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.539788 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.572162 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.579352 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 22:00:00 crc kubenswrapper[4979]: W0130 22:00:00.582227 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7326bcf_bcff_43db_a33c_04f37fbd0ad6.slice/crio-1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f WatchSource:0}: Error finding container 1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f: Status 404 returned error can't find the container with id 1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f Jan 30 22:00:00 crc kubenswrapper[4979]: W0130 22:00:00.583803 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de8c822_a2be_442f_af66_2e2b1991b947.slice/crio-e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb WatchSource:0}: Error finding container e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb: Status 404 returned error can't find the container with id e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.587734 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.972398 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:00:00 crc kubenswrapper[4979]: W0130 22:00:00.984657 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod365cfffa_828e_4f0e_9903_4c1580e20c67.slice/crio-4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48 WatchSource:0}: Error finding container 4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48: Status 404 returned error can't find the container with id 4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48 Jan 30 22:00:01 crc 
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.362425 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerStarted","Data":"4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.366444 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" event={"ID":"8de8c822-a2be-442f-af66-2e2b1991b947","Type":"ContainerStarted","Data":"e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.368131 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" event={"ID":"f7326bcf-bcff-43db-a33c-04f37fbd0ad6","Type":"ContainerStarted","Data":"1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.385433 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" podStartSLOduration=1.385364343 podStartE2EDuration="1.385364343s" podCreationTimestamp="2026-01-30 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:01.384735287 +0000 UTC m=+1197.345982330" watchObservedRunningTime="2026-01-30 22:00:01.385364343 +0000 UTC m=+1197.346611366"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.756687 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"]
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.789753 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"]
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.791218 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.803292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"]
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.901634 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.901726 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.901780 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.003506 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.003564 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.003603 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.004648 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.005440 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.035052 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.127133 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.140638 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"]
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.165161 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"]
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.171691 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.183507 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"]
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.309064 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.309179 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.309206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.401538 4979 generic.go:334] "Generic (PLEG): container finished" podID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerID="63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68" exitCode=0
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.401592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerDied","Data":"63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68"}
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.411616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.412604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
\"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.412816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.412845 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.413630 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.443804 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.518604 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.564557 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.948007 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.950369 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954006 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954267 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954532 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954764 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gddkv" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.955017 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.961287 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.961562 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.982211 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.128945 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129720 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129738 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129926 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129952 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130337 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130487 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130541 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232643 4979 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232715 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232755 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232787 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232874 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232907 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232950 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232993 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.234380 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.234545 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.234790 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.235357 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.236579 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.237061 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.242400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.245943 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.252780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.261652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.262517 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.287772 4979 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.310264 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.317924 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.319770 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: W0130 22:00:03.323165 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f23a16_7799_4e68_a4f9_0a392a20d0ee.slice/crio-90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118 WatchSource:0}: Error finding container 90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118: Status 404 returned error can't find the container with id 90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118 Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.323441 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.323655 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.323860 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.324113 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.324353 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pvjzf" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.325537 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.327155 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.365815 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.439366 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" event={"ID":"32529c55-774e-471a-8d6e-9ff5ba02c047","Type":"ContainerStarted","Data":"5fdca6155ec6e9cd50c6b13a2673eed1789482db7ff5c92cf6aabdd6bdf2e4cc"} Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.442792 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerStarted","Data":"90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118"} Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443476 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443537 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443567 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443611 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443666 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443700 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443777 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443842 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443889 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " 
pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443947 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546303 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546430 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546571 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546618 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546668 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.547649 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.547924 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.548300 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.570374 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.573780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.574446 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.579885 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.579937 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.582260 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.583015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.585255 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.597398 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.604352 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.665776 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.813633 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.953778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod \"365cfffa-828e-4f0e-9903-4c1580e20c67\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.953857 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"365cfffa-828e-4f0e-9903-4c1580e20c67\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.954061 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"365cfffa-828e-4f0e-9903-4c1580e20c67\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.955041 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume" (OuterVolumeSpecName: "config-volume") pod "365cfffa-828e-4f0e-9903-4c1580e20c67" (UID: "365cfffa-828e-4f0e-9903-4c1580e20c67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.958502 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "365cfffa-828e-4f0e-9903-4c1580e20c67" (UID: "365cfffa-828e-4f0e-9903-4c1580e20c67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.963103 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c" (OuterVolumeSpecName: "kube-api-access-qql6c") pod "365cfffa-828e-4f0e-9903-4c1580e20c67" (UID: "365cfffa-828e-4f0e-9903-4c1580e20c67"). InnerVolumeSpecName "kube-api-access-qql6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.058121 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.058628 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.058639 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.134107 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: W0130 22:00:04.150786 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode28a1e34_b97c_4090_adf8_fa3e2b766365.slice/crio-07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1 WatchSource:0}: Error finding container 07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1: Status 404 returned error can't find the container with id 07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1 Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.242081 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: W0130 22:00:04.251291 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod981f1fee_4d2a_4d80_bf38_80557b6c5033.slice/crio-b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff WatchSource:0}: Error finding container b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff: Status 404 returned error can't find the container with id b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.457527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerStarted","Data":"b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff"} Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.460120 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.460092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerDied","Data":"4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48"} Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.460408 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.462550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerStarted","Data":"07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1"} Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.577659 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: E0130 22:00:04.578164 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerName="collect-profiles" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.578181 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerName="collect-profiles" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.578351 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerName="collect-profiles" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.581607 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586448 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4n4fr" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586561 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586723 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586861 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.590979 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.595004 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666119 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666310 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666329 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666353 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666377 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666397 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774026 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774156 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774547 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774661 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774717 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774806 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.777507 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.777807 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.779568 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.781815 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.781952 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.798862 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.802833 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.812179 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.882209 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.910817 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:00:05 crc kubenswrapper[4979]: I0130 22:00:05.283926 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:00:05 crc kubenswrapper[4979]: I0130 22:00:05.477476 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerStarted","Data":"78ea57414491f2323050c139427e26db676dbcbe77ee157ba12f1a06c2d26416"} Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.029482 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.032395 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.037386 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.037401 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.037554 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.039181 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wj9ck" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.063555 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100143 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100226 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100605 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100779 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100861 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.101055 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204296 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204363 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204392 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204572 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204679 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.205419 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.206184 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.206721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.206713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.207459 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.218666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.230315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.233333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.247146 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.371847 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.372112 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.373202 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.377274 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6xhn8" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.378139 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.390657 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.425133 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519067 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519155 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519222 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519244 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc 
kubenswrapper[4979]: I0130 22:00:06.519278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621659 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621716 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621740 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.622887 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.624053 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.660962 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.661796 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc 
kubenswrapper[4979]: I0130 22:00:06.673893 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.716542 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:00:07 crc kubenswrapper[4979]: I0130 22:00:07.084584 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.330302 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.337476 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.348105 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.387513 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6b9fg" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.506438 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"kube-state-metrics-0\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.609014 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"kube-state-metrics-0\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.644355 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"kube-state-metrics-0\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.714107 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.339006 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.340407 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.347326 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.347407 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rp4wv" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.347615 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.354175 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.356204 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.362066 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.375774 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464167 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464239 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464461 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464547 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " 
pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464651 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464688 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464709 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464791 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.566867 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567270 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 
22:00:11.567376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567592 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567672 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568149 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567978 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod 
\"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567711 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568109 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568277 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568306 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567816 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568565 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.570113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.571259 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.573351 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.573580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.588791 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.588976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.676891 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.697402 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.984256 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.986799 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.990261 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.990558 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.991283 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n9pfk" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.991304 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.991476 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.003288 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178193 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178335 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178387 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178443 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178473 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178506 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178542 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280215 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280349 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280387 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280424 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280526 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280576 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 
22:00:12.280858 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.281022 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.281846 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.282674 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.290496 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.290750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.293225 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.305627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.307802 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.565089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerStarted","Data":"5c5282dd71d589822510ea8f2d38d385c993be6f5e42e4d1471904abd0c28e55"} Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 
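The block above traces kubelet's volume lifecycle for ovsdbserver-nb-0 in its usual order: the reconciler first records VerifyControllerAttachedVolume for every desired volume, then starts MountVolume, which for the local-volume plugin performs a device-level MountDevice (the global mount under /mnt/openstack/pv10) before the per-pod SetUp. Below is a minimal, illustrative sketch of that desired-state-to-actual-state phase machine; the types are simplified stand-ins, not kubelet's real volume-manager types (which live in pkg/kubelet/volumemanager).

package main

import "fmt"

// Illustrative only: simplified stand-ins for kubelet's volume manager.
type phase int

const (
	verified      phase = iota // VerifyControllerAttachedVolume recorded
	deviceMounted              // MountVolume.MountDevice done (local/block volumes only)
	setUp                      // MountVolume.SetUp done: volume is visible to the pod
)

type volume struct {
	name        string
	needsDevice bool // true for local-volume PVs like local-storage10-crc
	state       phase
}

// step advances one volume one phase, echoing the log's ordering.
func step(v *volume) bool {
	switch {
	case v.state == verified && v.needsDevice:
		fmt.Printf("MountVolume.MountDevice succeeded for volume %q\n", v.name)
		v.state = deviceMounted
	case v.state == verified || v.state == deviceMounted:
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
		v.state = setUp
	default:
		return false // already set up, nothing to do
	}
	return true
}

func main() {
	vols := []*volume{
		{name: "local-storage10-crc", needsDevice: true},
		{name: "config"},
		{name: "combined-ca-bundle"},
	}
	// Loop until a full pass makes no progress, like a reconciler tick.
	for progressed := true; progressed; {
		progressed = false
		for _, v := range vols {
			if step(v) {
				progressed = true
			}
		}
	}
}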
Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.609160 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.453791 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.458283 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461080 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461255 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w5vzq"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461217 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461686 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.463141 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650212 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650283 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650323 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650590 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.651008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.651196 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753456 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753628 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753678 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753698 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753731 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.754174 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.754831 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.755083 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.755713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.763439 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.768783 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.769428 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.772242 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.781048 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.832175 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.147394 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.149438 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqqfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(6795c6d5-6bb8-432f-b7ca-f29f33298093): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.150780 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.726711 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.489154 4979 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.489537 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7qvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e28a1e34-b97c-4090-adf8-fa3e2b766365): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.490847 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.735084 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.284434 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.284671 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69j9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-2qnqn_openstack(32529c55-774e-471a-8d6e-9ff5ba02c047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.286625 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" podUID="32529c55-774e-471a-8d6e-9ff5ba02c047" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.392277 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.392925 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4s8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-27kgx_openstack(f7326bcf-bcff-43db-a33c-04f37fbd0ad6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.394272 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" podUID="f7326bcf-bcff-43db-a33c-04f37fbd0ad6" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.455680 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.455959 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nnpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hgv8v_openstack(8de8c822-a2be-442f-af66-2e2b1991b947): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.457096 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" podUID="8de8c822-a2be-442f-af66-2e2b1991b947" Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.742994 4979 generic.go:334] "Generic (PLEG): container finished" podID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363" exitCode=0 Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.743391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerDied","Data":"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"} Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.750826 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerStarted","Data":"92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031"} Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.916590 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.974325 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.980339 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 22:00:31 crc 
Jan 30 22:00:31 crc kubenswrapper[4979]: W0130 22:00:31.097155 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0b30c9_4972_4476_90e8_eec8d5d44ce5.slice/crio-96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265 WatchSource:0}: Error finding container 96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265: Status 404 returned error can't find the container with id 96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265
Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.101697 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"]
Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.274284 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 22:00:31 crc kubenswrapper[4979]: E0130 22:00:31.374777 4979 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Jan 30 22:00:31 crc kubenswrapper[4979]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 30 22:00:31 crc kubenswrapper[4979]: > podSandboxID="5fdca6155ec6e9cd50c6b13a2673eed1789482db7ff5c92cf6aabdd6bdf2e4cc"
Jan 30 22:00:31 crc kubenswrapper[4979]: E0130 22:00:31.375026 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 22:00:31 crc kubenswrapper[4979]: init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69j9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-2qnqn_openstack(32529c55-774e-471a-8d6e-9ff5ba02c047): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 30 22:00:31 crc kubenswrapper[4979]: > logger="UnhandledError"
Jan 30 22:00:31 crc kubenswrapper[4979]: E0130 22:00:31.376814 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" podUID="32529c55-774e-471a-8d6e-9ff5ba02c047"
Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.392146 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.761064 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerStarted","Data":"1ba7eb4e73d21b76aae2c54799684c5d1a7e13a849894846bd2ade424074662c"}
Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.762707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerStarted","Data":"6de0f04b65ae33fad502fd47c75940202442c98e117caa698fb7adad6b0870b8"}
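The CreateContainerError above is a subPath failure: for a subPath mount, kubelet stages a bind-mount source under /var/lib/kubelet/pods/<uid>/volume-subpaths/<volume>/<container>/<index>, and here the runtime could not find that staged path, consistent with the pod already being torn down (its DELETE follows a few seconds later). The dnsmasq pods use subPath so that one key of the dns-svc configmap appears as a single file inside /etc/dnsmasq.d/hosts. Expressed with the real k8s.io/api types, mirroring the VolumeMount kubelet dumped in the Container struct above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The same mount kubelet printed in the &Container{...} dump:
	// only the "dns-svc" key of the configmap volume is projected,
	// as a single read-only file at the MountPath.
	m := corev1.VolumeMount{
		Name:      "dns-svc",
		ReadOnly:  true,
		MountPath: "/etc/dnsmasq.d/hosts/dns-svc",
		SubPath:   "dns-svc",
	}
	fmt.Printf("mount %s -> %s (subPath %q)\n", m.Name, m.MountPath, m.SubPath)
}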
event={"ID":"8de8c822-a2be-442f-af66-2e2b1991b947","Type":"ContainerDied","Data":"e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.764406 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.766766 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerStarted","Data":"bef9626e17c775699e3abae85cd19e88917b71194c8acdd56a70c42320faed2f"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.767811 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" event={"ID":"f7326bcf-bcff-43db-a33c-04f37fbd0ad6","Type":"ContainerDied","Data":"1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.767837 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.769065 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerStarted","Data":"073da3757392885be51de106d5a842ae9944cc19e4dc0f6b4686c2786716c716"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.771073 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerStarted","Data":"af076ee56d5886e64a296e55b03b5bb0ded8de489a95899c61270dac099f1dfe"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.772650 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerStarted","Data":"96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.934367 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.945839 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024304 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024532 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"8de8c822-a2be-442f-af66-2e2b1991b947\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024716 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"8de8c822-a2be-442f-af66-2e2b1991b947\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.025375 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7326bcf-bcff-43db-a33c-04f37fbd0ad6" (UID: "f7326bcf-bcff-43db-a33c-04f37fbd0ad6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.025438 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config" (OuterVolumeSpecName: "config") pod "8de8c822-a2be-442f-af66-2e2b1991b947" (UID: "8de8c822-a2be-442f-af66-2e2b1991b947"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.025863 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config" (OuterVolumeSpecName: "config") pod "f7326bcf-bcff-43db-a33c-04f37fbd0ad6" (UID: "f7326bcf-bcff-43db-a33c-04f37fbd0ad6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.032567 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033689 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033706 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033716 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033963 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n" (OuterVolumeSpecName: "kube-api-access-m4s8n") pod "f7326bcf-bcff-43db-a33c-04f37fbd0ad6" (UID: "f7326bcf-bcff-43db-a33c-04f37fbd0ad6"). InnerVolumeSpecName "kube-api-access-m4s8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.040338 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz" (OuterVolumeSpecName: "kube-api-access-8nnpz") pod "8de8c822-a2be-442f-af66-2e2b1991b947" (UID: "8de8c822-a2be-442f-af66-2e2b1991b947"). InnerVolumeSpecName "kube-api-access-8nnpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.136676 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.136719 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.781605 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerStarted","Data":"936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52"} Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784878 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerStarted","Data":"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"} Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784933 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784966 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784909 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.848303 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" podStartSLOduration=3.770798627 podStartE2EDuration="30.848282199s" podCreationTimestamp="2026-01-30 22:00:02 +0000 UTC" firstStartedPulling="2026-01-30 22:00:03.32752626 +0000 UTC m=+1199.288773293" lastFinishedPulling="2026-01-30 22:00:30.405009822 +0000 UTC m=+1226.366256865" observedRunningTime="2026-01-30 22:00:32.842788071 +0000 UTC m=+1228.804035104" watchObservedRunningTime="2026-01-30 22:00:32.848282199 +0000 UTC m=+1228.809529232" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.885245 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.888160 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.923249 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.929066 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:33 crc kubenswrapper[4979]: I0130 22:00:33.080187 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de8c822-a2be-442f-af66-2e2b1991b947" path="/var/lib/kubelet/pods/8de8c822-a2be-442f-af66-2e2b1991b947/volumes" Jan 30 22:00:33 crc kubenswrapper[4979]: I0130 22:00:33.080682 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7326bcf-bcff-43db-a33c-04f37fbd0ad6" path="/var/lib/kubelet/pods/f7326bcf-bcff-43db-a33c-04f37fbd0ad6/volumes" Jan 30 22:00:36 crc kubenswrapper[4979]: I0130 22:00:36.839619 4979 generic.go:334] "Generic (PLEG): container finished" podID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerID="92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031" exitCode=0 Jan 30 22:00:36 crc kubenswrapper[4979]: I0130 22:00:36.839727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerDied","Data":"92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031"} Jan 30 22:00:37 crc kubenswrapper[4979]: I0130 22:00:37.566187 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:37 crc kubenswrapper[4979]: I0130 22:00:37.621205 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"] Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.081429 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.243586 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"32529c55-774e-471a-8d6e-9ff5ba02c047\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.243678 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"32529c55-774e-471a-8d6e-9ff5ba02c047\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.243725 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"32529c55-774e-471a-8d6e-9ff5ba02c047\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.255506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n" (OuterVolumeSpecName: "kube-api-access-69j9n") pod "32529c55-774e-471a-8d6e-9ff5ba02c047" (UID: "32529c55-774e-471a-8d6e-9ff5ba02c047"). InnerVolumeSpecName "kube-api-access-69j9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.276355 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32529c55-774e-471a-8d6e-9ff5ba02c047" (UID: "32529c55-774e-471a-8d6e-9ff5ba02c047"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.277529 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config" (OuterVolumeSpecName: "config") pod "32529c55-774e-471a-8d6e-9ff5ba02c047" (UID: "32529c55-774e-471a-8d6e-9ff5ba02c047"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.345642 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.345683 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.345694 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.865780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerStarted","Data":"364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e"} Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.868615 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" event={"ID":"32529c55-774e-471a-8d6e-9ff5ba02c047","Type":"ContainerDied","Data":"5fdca6155ec6e9cd50c6b13a2673eed1789482db7ff5c92cf6aabdd6bdf2e4cc"} Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.868903 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.928358 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"] Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.938791 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"] Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.080402 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32529c55-774e-471a-8d6e-9ff5ba02c047" path="/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volumes" Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.882522 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerStarted","Data":"b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4"} Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.886644 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerStarted","Data":"11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a"} Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.886754 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.888588 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" exitCode=0 Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.888653 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" 
event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd"} Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.891815 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerStarted","Data":"20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df"} Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.892212 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.894780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerStarted","Data":"2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9"} Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.894911 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.899634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerStarted","Data":"e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d"} Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.909320 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.719469816 podStartE2EDuration="35.909301869s" podCreationTimestamp="2026-01-30 22:00:04 +0000 UTC" firstStartedPulling="2026-01-30 22:00:12.213556345 +0000 UTC m=+1208.174803388" lastFinishedPulling="2026-01-30 22:00:30.403388408 +0000 UTC m=+1226.364635441" observedRunningTime="2026-01-30 22:00:39.904617242 +0000 UTC m=+1235.865864275" watchObservedRunningTime="2026-01-30 22:00:39.909301869 +0000 UTC m=+1235.870548902" Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.927575 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.709089034 podStartE2EDuration="31.92754754s" podCreationTimestamp="2026-01-30 22:00:08 +0000 UTC" firstStartedPulling="2026-01-30 22:00:30.903642756 +0000 UTC m=+1226.864889789" lastFinishedPulling="2026-01-30 22:00:39.122101262 +0000 UTC m=+1235.083348295" observedRunningTime="2026-01-30 22:00:39.922242417 +0000 UTC m=+1235.883489450" watchObservedRunningTime="2026-01-30 22:00:39.92754754 +0000 UTC m=+1235.888794573" Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.970276 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.661802394 podStartE2EDuration="33.970250608s" podCreationTimestamp="2026-01-30 22:00:06 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.072944431 +0000 UTC m=+1227.034191464" lastFinishedPulling="2026-01-30 22:00:37.381392645 +0000 UTC m=+1233.342639678" observedRunningTime="2026-01-30 22:00:39.965720606 +0000 UTC m=+1235.926967639" watchObservedRunningTime="2026-01-30 22:00:39.970250608 +0000 UTC m=+1235.931497641" Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.911495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerStarted","Data":"9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844"} Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.913953 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerStarted","Data":"ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402"} Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.916715 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerStarted","Data":"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb"} Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.916749 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerStarted","Data":"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70"} Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.917129 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.932385 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.209070092 podStartE2EDuration="30.932353669s" podCreationTimestamp="2026-01-30 22:00:10 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.47293228 +0000 UTC m=+1227.434179313" lastFinishedPulling="2026-01-30 22:00:40.196215857 +0000 UTC m=+1236.157462890" observedRunningTime="2026-01-30 22:00:40.92791188 +0000 UTC m=+1236.889158913" watchObservedRunningTime="2026-01-30 22:00:40.932353669 +0000 UTC m=+1236.893600702" Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.934785 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kxk8g" podStartSLOduration=23.143787828 podStartE2EDuration="29.934768785s" podCreationTimestamp="2026-01-30 22:00:11 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.103378398 +0000 UTC m=+1227.064625421" lastFinishedPulling="2026-01-30 22:00:37.894359345 +0000 UTC m=+1233.855606378" observedRunningTime="2026-01-30 22:00:39.989842915 +0000 UTC m=+1235.951089978" watchObservedRunningTime="2026-01-30 22:00:40.934768785 +0000 UTC m=+1236.896015818" Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.955622 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.245349099 podStartE2EDuration="26.955592645s" podCreationTimestamp="2026-01-30 22:00:14 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.482194369 +0000 UTC m=+1227.443441422" lastFinishedPulling="2026-01-30 22:00:40.192437935 +0000 UTC m=+1236.153684968" observedRunningTime="2026-01-30 22:00:40.95242342 +0000 UTC m=+1236.913670463" watchObservedRunningTime="2026-01-30 22:00:40.955592645 +0000 UTC m=+1236.916839708" Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.981411 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tmjt2" podStartSLOduration=23.974858476 podStartE2EDuration="29.981381928s" podCreationTimestamp="2026-01-30 22:00:11 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.374829571 +0000 UTC m=+1227.336076604" lastFinishedPulling="2026-01-30 22:00:37.381353023 +0000 UTC 
m=+1233.342600056" observedRunningTime="2026-01-30 22:00:40.981251065 +0000 UTC m=+1236.942498118" watchObservedRunningTime="2026-01-30 22:00:40.981381928 +0000 UTC m=+1236.942628951" Jan 30 22:00:41 crc kubenswrapper[4979]: I0130 22:00:41.698402 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.610509 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.612559 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.653391 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.832685 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.882523 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:42 crc kubenswrapper[4979]: E0130 22:00:42.886518 4979 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.143:39584->38.102.83.143:38353: write tcp 38.102.83.143:39584->38.102.83.143:38353: write: broken pipe Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.932919 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:43 crc kubenswrapper[4979]: I0130 22:00:43.945106 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerStarted","Data":"c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2"} Jan 30 22:00:43 crc kubenswrapper[4979]: I0130 22:00:43.998046 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.255709 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"] Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.264320 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.268486 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.291578 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"] Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.321195 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.322284 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.329427 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.346516 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371066 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371135 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371160 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371192 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371214 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371264 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371312 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod 
\"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371337 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371371 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.473544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475215 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475329 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475436 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"ovn-controller-metrics-lz8zj\" 
(UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475487 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.477160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.478794 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.479869 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.480277 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.480333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.480446 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 
22:00:44.482146 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.483321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.503997 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"] Jan 30 22:00:44 crc kubenswrapper[4979]: E0130 22:00:44.505204 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-tnm5g], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" podUID="dbccd103-4e22-4fd6-a5ad-fc996b992328" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.510929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.511596 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.527007 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.528542 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.533125 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.545201 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578756 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578843 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578874 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578987 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.579347 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.652930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680799 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680856 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680927 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680993 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.681782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.682102 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.682115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.683226 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 
22:00:44.766487 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.891825 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.953881 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.969149 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.086742 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087028 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087115 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087196 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087482 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config" (OuterVolumeSpecName: "config") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087827 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087854 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.088065 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.088092 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.097175 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g" (OuterVolumeSpecName: "kube-api-access-tnm5g") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "kube-api-access-tnm5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.190533 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.190574 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.242098 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.567305 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:00:45 crc kubenswrapper[4979]: W0130 22:00:45.576403 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b4df8c_21a1_4acb_b209_643ded266729.slice/crio-4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff WatchSource:0}: Error finding container 4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff: Status 404 returned error can't find the container with id 4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.972337 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerStarted","Data":"4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff"} Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.974372 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.977411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerStarted","Data":"bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff"} Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.977500 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerStarted","Data":"4155908da65ed980762b6600d6cd531e31e34d1e8a5cf0688a19ba647961bebc"} Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.003320 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lz8zj" podStartSLOduration=2.003300784 podStartE2EDuration="2.003300784s" podCreationTimestamp="2026-01-30 22:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:45.9975533 +0000 UTC m=+1241.958800333" watchObservedRunningTime="2026-01-30 22:00:46.003300784 +0000 UTC m=+1241.964547807" Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.055887 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"] Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.061674 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"] Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.372983 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.375853 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.506769 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.718213 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.986613 4979 generic.go:334] "Generic (PLEG): container finished" podID="34b4df8c-21a1-4acb-b209-643ded266729" containerID="38f94d44f88fb380f22ffff6e87982c6b5afa5689e2945b28203090cec0d6de2" exitCode=0 Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.986729 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerDied","Data":"38f94d44f88fb380f22ffff6e87982c6b5afa5689e2945b28203090cec0d6de2"} Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.989528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerStarted","Data":"d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902"} Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.086016 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbccd103-4e22-4fd6-a5ad-fc996b992328" path="/var/lib/kubelet/pods/dbccd103-4e22-4fd6-a5ad-fc996b992328/volumes" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.108521 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.658156 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.872006 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.875053 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879580 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879626 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879881 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879968 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xt5jr" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.915924 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955254 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955342 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955376 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955514 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955566 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"ovn-northd-0\" (UID: 
\"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955816 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.008914 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerStarted","Data":"77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e"} Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.009730 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.031704 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podStartSLOduration=4.031686681 podStartE2EDuration="4.031686681s" podCreationTimestamp="2026-01-30 22:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:48.029714087 +0000 UTC m=+1243.990961120" watchObservedRunningTime="2026-01-30 22:00:48.031686681 +0000 UTC m=+1243.992933714" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.057841 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058025 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058191 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058247 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"ovn-northd-0\" 
(UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.059332 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.059709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.060652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.067327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.165883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.166951 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.167245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.207746 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.738473 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.781533 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.838080 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.839864 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.910125 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.919516 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.980653 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.983644 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.983959 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.984084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.984200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.031136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerStarted","Data":"2bd740bd191cb301e1ace5a3abcf92c5ccb570c941fcbb8171a41eb9fdac51bb"} Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086227 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086419 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086537 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.087434 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.088246 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.092410 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.095775 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.122078 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: 
\"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.211534 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.755751 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.969408 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.976312 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980052 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980237 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980328 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980546 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-c8jn4" Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.988628 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.042730 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerStarted","Data":"9e5bb560297f4e0e8f2115f8c48331514e53ce9d31d3b53377b9d219de77d2e7"} Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.043335 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" containerID="cri-o://77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e" gracePeriod=10 Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110615 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110746 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110876 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28trk\" 
(UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110957 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213861 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213962 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213981 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214047 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.214045 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.214288 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.214349 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:50.714330586 +0000 UTC m=+1246.675577619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214500 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214080 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214934 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.215208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.220192 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.244403 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.248121 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.474862 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.476407 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.479410 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.479846 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.480640 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.492895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.542253 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.543426 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-tzk4z ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-t6khq" podUID="a06ee3da-092d-42ff-a8f5-b06a6e9022a7" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.559366 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.561116 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.566907 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.623279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.623657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.623817 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624506 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624582 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624827 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727522 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727672 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727761 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.727796 4979 projected.go:288] Couldn't 
get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727826 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.727830 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.727990 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:51.727970554 +0000 UTC m=+1247.689217587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728299 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728446 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728561 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728729 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728799 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728899 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"swift-ring-rebalance-qf69d\" (UID: 
\"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729136 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729222 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729313 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729802 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.730193 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.733613 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.733851 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.747477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.751928 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831357 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831451 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831523 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831722 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831777 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.832004 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.832580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.832813 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.839567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.841547 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.842069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.862204 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.879317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.055675 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.124318 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238289 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238349 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238414 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238487 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238651 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238694 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238741 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.240264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.240697 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts" (OuterVolumeSpecName: "scripts") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.240913 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.243782 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.243856 4979 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.243874 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.246172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z" (OuterVolumeSpecName: "kube-api-access-tzk4z") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "kube-api-access-tzk4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.265898 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.267277 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.267431 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.287102 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345557 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345631 4979 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345642 4979 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345653 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.752717 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:51 crc kubenswrapper[4979]: E0130 22:00:51.753350 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:51 crc kubenswrapper[4979]: E0130 22:00:51.753367 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:51 crc kubenswrapper[4979]: E0130 22:00:51.753418 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:53.753402379 +0000 UTC m=+1249.714649412 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.806746 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.807580 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerStarted","Data":"1cded23ff5ee2d2e3497c55f604788871e1bcd1e4e1acb05a7084523b596fe7e"} Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.887993 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.894842 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.080310 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06ee3da-092d-42ff-a8f5-b06a6e9022a7" path="/var/lib/kubelet/pods/a06ee3da-092d-42ff-a8f5-b06a6e9022a7/volumes" Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.804058 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:53 crc kubenswrapper[4979]: E0130 22:00:53.804366 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:53 crc kubenswrapper[4979]: E0130 22:00:53.804635 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:53 crc kubenswrapper[4979]: E0130 22:00:53.804748 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:57.804723402 +0000 UTC m=+1253.765970425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.820980 4979 generic.go:334] "Generic (PLEG): container finished" podID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerID="c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2" exitCode=0 Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.821304 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerDied","Data":"c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2"} Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.826859 4979 generic.go:334] "Generic (PLEG): container finished" podID="34b4df8c-21a1-4acb-b209-643ded266729" containerID="77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e" exitCode=0 Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.826909 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerDied","Data":"77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e"} Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.117298 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.124870 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.131572 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.137119 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.240883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.241134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.342997 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.343153 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dmq\" (UniqueName: 
\"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.344079 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.363685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.452337 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6g89l" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.863344 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerStarted","Data":"62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962"} Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.894131 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371983.96067 podStartE2EDuration="52.894106108s" podCreationTimestamp="2026-01-30 22:00:03 +0000 UTC" firstStartedPulling="2026-01-30 22:00:05.305997284 +0000 UTC m=+1201.267244317" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:55.890647316 +0000 UTC m=+1251.851894389" watchObservedRunningTime="2026-01-30 22:00:55.894106108 +0000 UTC m=+1251.855353141" Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.966152 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:00:55 crc kubenswrapper[4979]: W0130 22:00:55.967847 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee1e511_fa3d_4c0f_b03b_c0608b253006.slice/crio-c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642 WatchSource:0}: Error finding container c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642: Status 404 returned error can't find the container with id c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642 Jan 30 22:00:56 crc kubenswrapper[4979]: I0130 22:00:56.875747 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerStarted","Data":"c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642"} Jan 30 22:00:57 crc kubenswrapper[4979]: I0130 22:00:57.806626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:57 crc kubenswrapper[4979]: E0130 
22:00:57.806842 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:57 crc kubenswrapper[4979]: E0130 22:00:57.806865 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:57 crc kubenswrapper[4979]: E0130 22:00:57.806950 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:01:05.806931865 +0000 UTC m=+1261.768178908 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:59 crc kubenswrapper[4979]: I0130 22:00:59.894065 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Jan 30 22:01:00 crc kubenswrapper[4979]: I0130 22:01:00.930800 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerStarted","Data":"92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb"} Jan 30 22:01:03 crc kubenswrapper[4979]: I0130 22:01:03.991212 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6g89l" podStartSLOduration=8.991178188 podStartE2EDuration="8.991178188s" podCreationTimestamp="2026-01-30 22:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:03.988595938 +0000 UTC m=+1259.949843031" watchObservedRunningTime="2026-01-30 22:01:03.991178188 +0000 UTC m=+1259.952425221" Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.896213 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.911294 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.911398 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.985546 4979 generic.go:334] "Generic (PLEG): container finished" podID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerID="936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52" exitCode=0 Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.985611 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerDied","Data":"936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52"} Jan 30 22:01:05 crc kubenswrapper[4979]: I0130 22:01:05.903334 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:01:05 crc kubenswrapper[4979]: E0130 22:01:05.903663 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:01:05 crc kubenswrapper[4979]: E0130 22:01:05.904257 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:01:05 crc kubenswrapper[4979]: E0130 22:01:05.904353 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:01:21.904323904 +0000 UTC m=+1277.865570937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:01:07 crc kubenswrapper[4979]: E0130 22:01:07.341750 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1776585578/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Jan 30 22:01:07 crc kubenswrapper[4979]: E0130 22:01:07.342624 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n694h687h59fh57h659h5fdh5f4h647h575h596h547h55ch666h565h577h57bh87h65ch67dh5f5h586h5c6h67ch5f9h697h595hb9h5b6h5b7h5cbh68fh664q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n5b6h645h58h5c9h5bbh676h676h68ch64bh95h587h699hddh647h585h8chd6h5f7h549h5cfh59bhddh657h6bhb8h699h8bh545h5fch8fh656hf9q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n5bch84h686h57dh5c4h5c4h555h5c5h58ch68dh74h59h558h5dfh549h8h8fh644h5ddh688h79h658h68bh668h669hbfh555h5d5hf8h5c9h8dh58dq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:nd9h65fh6fh56h565h64ch78h7ch59dh67dh555h96h688h674h594hbdh5f4h65bh5cfh55hc6hc6hf8h65h58fh67fhc8h9ch586h66ch54dhcbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dn9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:n
il,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(e7cc7cf6-3592-4e25-9578-27ae56d6909b): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1776585578/1\": happened during read: context canceled" logger="UnhandledError" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.444646 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538240 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538327 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538383 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538509 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.544551 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc" (OuterVolumeSpecName: "kube-api-access-jcvtc") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "kube-api-access-jcvtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.578506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.579743 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config" (OuterVolumeSpecName: "config") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.580213 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.580731 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641208 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641248 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641264 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641276 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641288 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.020656 4979 generic.go:334] "Generic (PLEG): container finished" podID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerID="34481ae8a2678ceccfab661611d1800a7d06957c7a2f8615105c54e98d7da90e" exitCode=0 Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.020764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerDied","Data":"34481ae8a2678ceccfab661611d1800a7d06957c7a2f8615105c54e98d7da90e"} Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.024289 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerDied","Data":"4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff"} Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.024378 4979 scope.go:117] "RemoveContainer" containerID="77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.024308 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.026742 4979 generic.go:334] "Generic (PLEG): container finished" podID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerID="92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb" exitCode=0 Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.026823 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerDied","Data":"92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb"} Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.088195 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.098855 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:01:08 crc kubenswrapper[4979]: E0130 22:01:08.950862 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" Jan 30 22:01:08 crc kubenswrapper[4979]: E0130 22:01:08.951131 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,Command:[/usr/local/bin/swift-ring-tool all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:0c9dfe78-7cd7-434e-8308-095f6953ebb6,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4d6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,
SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-qf69d_openstack(29c6531f-d97f-4f39-95bd-4c2b8a75779f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.953155 4979 scope.go:117] "RemoveContainer" containerID="38f94d44f88fb380f22ffff6e87982c6b5afa5689e2945b28203090cec0d6de2" Jan 30 22:01:08 crc kubenswrapper[4979]: E0130 22:01:08.953149 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-qf69d" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" Jan 30 22:01:09 crc kubenswrapper[4979]: E0130 22:01:09.045424 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified\\\"\"" pod="openstack/swift-ring-rebalance-qf69d" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.090308 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4df8c-21a1-4acb-b209-643ded266729" path="/var/lib/kubelet/pods/34b4df8c-21a1-4acb-b209-643ded266729/volumes" Jan 30 22:01:09 crc kubenswrapper[4979]: E0130 22:01:09.388501 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1776585578/1\\\": happened during read: context canceled\"" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.427634 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6g89l" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.599625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.599907 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.600998 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ee1e511-fa3d-4c0f-b03b-c0608b253006" (UID: "8ee1e511-fa3d-4c0f-b03b-c0608b253006"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.606327 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq" (OuterVolumeSpecName: "kube-api-access-m2dmq") pod "8ee1e511-fa3d-4c0f-b03b-c0608b253006" (UID: "8ee1e511-fa3d-4c0f-b03b-c0608b253006"). InnerVolumeSpecName "kube-api-access-m2dmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.702821 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.703333 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.896747 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.055314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerStarted","Data":"32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2"} Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.055733 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.062545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerStarted","Data":"80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1"} Jan 30 22:01:10 crc kubenswrapper[4979]: E0130 22:01:10.065429 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.068756 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerDied","Data":"c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642"} Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.068797 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6g89l" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.068813 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.078508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerStarted","Data":"11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b"} Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.080021 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.115280 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.003202002 podStartE2EDuration="1m9.115253852s" podCreationTimestamp="2026-01-30 22:00:01 +0000 UTC" firstStartedPulling="2026-01-30 22:00:04.254484537 +0000 UTC m=+1200.215731570" lastFinishedPulling="2026-01-30 22:00:30.366536377 +0000 UTC m=+1226.327783420" observedRunningTime="2026-01-30 22:01:10.087911107 +0000 UTC m=+1266.049158200" watchObservedRunningTime="2026-01-30 22:01:10.115253852 +0000 UTC m=+1266.076500905" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.137561 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-t86qb" podStartSLOduration=22.137534031 podStartE2EDuration="22.137534031s" podCreationTimestamp="2026-01-30 22:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:10.135655061 +0000 UTC m=+1266.096902104" watchObservedRunningTime="2026-01-30 22:01:10.137534031 +0000 UTC m=+1266.098781064" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.012278 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 22:01:11 crc kubenswrapper[4979]: E0130 22:01:11.089253 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.103348 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.723237 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" 
probeResult="failure" output=< Jan 30 22:01:11 crc kubenswrapper[4979]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 22:01:11 crc kubenswrapper[4979]: > Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.753130 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.754564 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112089 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:12 crc kubenswrapper[4979]: E0130 22:01:12.112642 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="init" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112659 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="init" Jan 30 22:01:12 crc kubenswrapper[4979]: E0130 22:01:12.112679 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerName="mariadb-account-create-update" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112690 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerName="mariadb-account-create-update" Jan 30 22:01:12 crc kubenswrapper[4979]: E0130 22:01:12.112708 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112717 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112938 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112954 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerName="mariadb-account-create-update" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.115758 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.118395 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.124332 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259444 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259574 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259706 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259825 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259961 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.361891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.361980 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgj4\" (UniqueName: 
\"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362021 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362217 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362273 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362420 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362411 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362476 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.363178 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.364398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.389863 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.436856 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.832687 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.103640 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g-config-vrffk" event={"ID":"4d465425-7b56-4a09-8c9f-91888b8097f9","Type":"ContainerStarted","Data":"3cff4c21528190bac2f5805403dd35c95c1b670810f5a9a916e00292b42d081e"} Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.546188 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.554593 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.602088 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.603479 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.611221 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.612254 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.692099 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.692178 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.794153 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.795078 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.796868 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.820398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.919478 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.127107 4979 generic.go:334] "Generic (PLEG): container finished" podID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerID="80e1c8de2f5d2def08241e9e838d6caa9d9317d6bfc0e4390d83af93615634c1" exitCode=0 Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.127194 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g-config-vrffk" event={"ID":"4d465425-7b56-4a09-8c9f-91888b8097f9","Type":"ContainerDied","Data":"80e1c8de2f5d2def08241e9e838d6caa9d9317d6bfc0e4390d83af93615634c1"} Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.217194 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.284279 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.284553 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" containerID="cri-o://33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" gracePeriod=10 Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.443310 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.756279 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.817730 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.817854 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.817949 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.827332 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q" (OuterVolumeSpecName: "kube-api-access-75t5q") pod "16f23a16-7799-4e68-a4f9-0a392a20d0ee" (UID: "16f23a16-7799-4e68-a4f9-0a392a20d0ee"). InnerVolumeSpecName "kube-api-access-75t5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.871047 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config" (OuterVolumeSpecName: "config") pod "16f23a16-7799-4e68-a4f9-0a392a20d0ee" (UID: "16f23a16-7799-4e68-a4f9-0a392a20d0ee"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.874587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16f23a16-7799-4e68-a4f9-0a392a20d0ee" (UID: "16f23a16-7799-4e68-a4f9-0a392a20d0ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.924737 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.925027 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.925163 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.082640 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" path="/var/lib/kubelet/pods/8ee1e511-fa3d-4c0f-b03b-c0608b253006/volumes" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140667 4979 generic.go:334] "Generic (PLEG): container finished" podID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" exitCode=0 Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140752 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerDied","Data":"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"}
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerDied","Data":"90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118"}
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140868 4979 scope.go:117] "RemoveContainer" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.144552 4979 generic.go:334] "Generic (PLEG): container finished" podID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerID="0d4dc8128d54521f9ca5effeeca0076315899d8799e67ef62bddd57c385893e0" exitCode=0
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.144846 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hplgk" event={"ID":"6bd0719b-952d-4080-a685-ce90c1c3bf93","Type":"ContainerDied","Data":"0d4dc8128d54521f9ca5effeeca0076315899d8799e67ef62bddd57c385893e0"}
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.144896 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hplgk" event={"ID":"6bd0719b-952d-4080-a685-ce90c1c3bf93","Type":"ContainerStarted","Data":"a0fc3aa14643ab8338851ee1a2c5bec0bc555e85843e53791bb00ed3c540ea43"}
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.185858 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"]
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.187324 4979 scope.go:117] "RemoveContainer" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.197361 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"]
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.226203 4979 scope.go:117] "RemoveContainer" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"
Jan 30 22:01:15 crc kubenswrapper[4979]: E0130 22:01:15.227010 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503\": container with ID starting with 33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503 not found: ID does not exist" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.227100 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"} err="failed to get container status \"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503\": rpc error: code = NotFound desc = could not find container \"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503\": container with ID starting with 33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503 not found: ID does not exist"
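The NotFound errors just above are benign: DeleteContainer asks the runtime for the status of a container that cri-o has already removed, and "ID does not exist" means there is nothing left to delete. A minimal sketch of that tolerance, assuming the runtime client surfaces gRPC status errors (google.golang.org/grpc); the helper names are hypothetical, not kubelet's:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer tolerates an already-deleted container the way the log above shows.
    func removeContainer(id string, getStatus func(id string) error) error {
    	if err := getStatus(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			// Already gone from the runtime: nothing further to delete.
    			fmt.Printf("container %q already gone, nothing to delete\n", id)
    			return nil
    		}
    		return fmt.Errorf("failed to get container status %q: %w", id, err)
    	}
    	// ... otherwise stop and delete the container via the runtime ...
    	return nil
    }

    func main() {
    	gone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	if err := removeContainer("33f9b19b6f27", gone); err != nil {
    		fmt.Println("unexpected:", err)
    	}
    }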
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.227176 4979 scope.go:117] "RemoveContainer" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"
Jan 30 22:01:15 crc kubenswrapper[4979]: E0130 22:01:15.228732 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363\": container with ID starting with a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363 not found: ID does not exist" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.228817 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"} err="failed to get container status \"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363\": rpc error: code = NotFound desc = could not find container \"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363\": container with ID starting with a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363 not found: ID does not exist"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.544451 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk"
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.647777 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") "
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.647865 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") "
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.647993 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") "
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.648050 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") "
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.648091 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") "
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.648207 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") "
Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650179 
4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run" (OuterVolumeSpecName: "var-run") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650309 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650334 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.651253 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts" (OuterVolumeSpecName: "scripts") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.664425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4" (OuterVolumeSpecName: "kube-api-access-rdgj4") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "kube-api-access-rdgj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750182 4979 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750224 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750238 4979 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750251 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750264 4979 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750310 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.157567 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g-config-vrffk" event={"ID":"4d465425-7b56-4a09-8c9f-91888b8097f9","Type":"ContainerDied","Data":"3cff4c21528190bac2f5805403dd35c95c1b670810f5a9a916e00292b42d081e"} Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.157635 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cff4c21528190bac2f5805403dd35c95c1b670810f5a9a916e00292b42d081e" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.157632 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.320660 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.321959 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerName="ovn-config" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.321981 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerName="ovn-config" Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.322049 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="init" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322059 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="init" Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.322111 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322122 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322669 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerName="ovn-config" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322702 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.323834 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.344511 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.439045 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.445753 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.451446 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.466402 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.466639 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.458145 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.525462 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.568839 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.568962 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.569014 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.569063 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.570229 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.608835 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.650118 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.657339 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.665202 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671213 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"6bd0719b-952d-4080-a685-ce90c1c3bf93\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671357 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"6bd0719b-952d-4080-a685-ce90c1c3bf93\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671842 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671902 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.672703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bd0719b-952d-4080-a685-ce90c1c3bf93" (UID: "6bd0719b-952d-4080-a685-ce90c1c3bf93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.672778 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.675195 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp" (OuterVolumeSpecName: "kube-api-access-pn5kp") pod "6bd0719b-952d-4080-a685-ce90c1c3bf93" (UID: "6bd0719b-952d-4080-a685-ce90c1c3bf93"). InnerVolumeSpecName "kube-api-access-pn5kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.706460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.717959 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.718581 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerName="mariadb-account-create-update" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.718605 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerName="mariadb-account-create-update" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.718866 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerName="mariadb-account-create-update" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.719709 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.732451 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.733764 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.751602 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.754585 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.773751 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.773807 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.789937 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kxk8g" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.800101 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.836788 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875704 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875816 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875953 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875983 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.927876 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.933874 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.951957 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.977868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.977982 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.978020 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.978126 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.979315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.979946 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.034673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.040640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.044688 4979 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.052937 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.054665 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.055899 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.060898 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.062713 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.081629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.081737 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.086542 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" path="/var/lib/kubelet/pods/16f23a16-7799-4e68-a4f9-0a392a20d0ee/volumes" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.087404 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" path="/var/lib/kubelet/pods/4d465425-7b56-4a09-8c9f-91888b8097f9/volumes" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.171988 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hplgk" event={"ID":"6bd0719b-952d-4080-a685-ce90c1c3bf93","Type":"ContainerDied","Data":"a0fc3aa14643ab8338851ee1a2c5bec0bc555e85843e53791bb00ed3c540ea43"} Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.172049 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fc3aa14643ab8338851ee1a2c5bec0bc555e85843e53791bb00ed3c540ea43" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.172149 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187018 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187305 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187371 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.188758 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.211010 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.233351 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.288865 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.289075 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.289688 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.309681 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.396515 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.426644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.510318 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:01:17 crc kubenswrapper[4979]: W0130 22:01:17.520594 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81fec9c6_beaa_4731_b527_51284f88fb92.slice/crio-d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b WatchSource:0}: Error finding container d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b: Status 404 returned error can't find the container with id d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.612384 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.683675 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:01:17 crc kubenswrapper[4979]: W0130 22:01:17.694417 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3dfb7c0_8bfc_47f8_bd7d_11fa49469326.slice/crio-fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c WatchSource:0}: Error finding container fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c: Status 404 returned error can't find the container with id fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.937918 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:01:17 crc kubenswrapper[4979]: W0130 22:01:17.948355 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0187b79_63c8_4f13_af19_892e8c9b36f9.slice/crio-138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932 WatchSource:0}: Error finding container 138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932: Status 404 returned error can't find the container with id 138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932 Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.003804 4979 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:01:18 crc kubenswrapper[4979]: W0130 22:01:18.011899 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83840d8c_fe62_449c_a3ab_5404215dce87.slice/crio-4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f WatchSource:0}: Error finding container 4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f: Status 404 returned error can't find the container with id 4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.191460 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gds8v" event={"ID":"83840d8c-fe62-449c-a3ab-5404215dce87","Type":"ContainerStarted","Data":"4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.193156 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerStarted","Data":"e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.193197 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerStarted","Data":"fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.195389 4979 generic.go:334] "Generic (PLEG): container finished" podID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerID="5b349812d2a4fb80dba197720305dc0e90cd12df7c5b2836dc61787bdf46e880" exitCode=0 Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.195460 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zct57" event={"ID":"4320dd9b-0e3c-474b-bb1a-e00a72ae2938","Type":"ContainerDied","Data":"5b349812d2a4fb80dba197720305dc0e90cd12df7c5b2836dc61787bdf46e880"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.195492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zct57" event={"ID":"4320dd9b-0e3c-474b-bb1a-e00a72ae2938","Type":"ContainerStarted","Data":"0dd19a0eaa6d35cccc61aae1cab273a967511b6ad16907005ffdc3ec7b0a3d9f"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.198592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerStarted","Data":"a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.198629 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerStarted","Data":"4d97788c279e351ba877ef75288ac4a26b1ff285ae90d5d47ad933b5c4cdbcba"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.201108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerStarted","Data":"d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.201149 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerStarted","Data":"d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.202918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerStarted","Data":"138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.215518 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-0121-account-create-update-k277d" podStartSLOduration=2.215495738 podStartE2EDuration="2.215495738s" podCreationTimestamp="2026-01-30 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:18.214903712 +0000 UTC m=+1274.176150745" watchObservedRunningTime="2026-01-30 22:01:18.215495738 +0000 UTC m=+1274.176742771" Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.237346 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bb3f-account-create-update-dc7fc" podStartSLOduration=2.237321965 podStartE2EDuration="2.237321965s" podCreationTimestamp="2026-01-30 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:18.233880633 +0000 UTC m=+1274.195127676" watchObservedRunningTime="2026-01-30 22:01:18.237321965 +0000 UTC m=+1274.198568998" Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.259129 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-krqxx" podStartSLOduration=2.259097112 podStartE2EDuration="2.259097112s" podCreationTimestamp="2026-01-30 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:18.252687219 +0000 UTC m=+1274.213934252" watchObservedRunningTime="2026-01-30 22:01:18.259097112 +0000 UTC m=+1274.220344145" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.213950 4979 generic.go:334] "Generic (PLEG): container finished" podID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerID="d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902" exitCode=0 Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.214136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerDied","Data":"d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.222201 4979 generic.go:334] "Generic (PLEG): container finished" podID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerID="a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a" exitCode=0 Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.222274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerDied","Data":"a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.224622 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerStarted","Data":"ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.226616 4979 generic.go:334] "Generic (PLEG): container finished" podID="83840d8c-fe62-449c-a3ab-5404215dce87" containerID="c2e6fa2e1a73e8bf62b5ee3edf154e0d34b174fdf34335916ed3037f6db0258e" exitCode=0 Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.226788 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gds8v" event={"ID":"83840d8c-fe62-449c-a3ab-5404215dce87","Type":"ContainerDied","Data":"c2e6fa2e1a73e8bf62b5ee3edf154e0d34b174fdf34335916ed3037f6db0258e"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.291482 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b6e4-account-create-update-kc2rf" podStartSLOduration=2.2914562529999998 podStartE2EDuration="2.291456253s" podCreationTimestamp="2026-01-30 22:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:19.284903237 +0000 UTC m=+1275.246150270" watchObservedRunningTime="2026-01-30 22:01:19.291456253 +0000 UTC m=+1275.252703286" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.643190 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.767797 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.768359 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.769795 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4320dd9b-0e3c-474b-bb1a-e00a72ae2938" (UID: "4320dd9b-0e3c-474b-bb1a-e00a72ae2938"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.775345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r" (OuterVolumeSpecName: "kube-api-access-7w68r") pod "4320dd9b-0e3c-474b-bb1a-e00a72ae2938" (UID: "4320dd9b-0e3c-474b-bb1a-e00a72ae2938"). InnerVolumeSpecName "kube-api-access-7w68r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.870984 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.871317 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.222831 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.239200 4979 generic.go:334] "Generic (PLEG): container finished" podID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerID="e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c" exitCode=0 Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.239291 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerDied","Data":"e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.241539 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zct57" event={"ID":"4320dd9b-0e3c-474b-bb1a-e00a72ae2938","Type":"ContainerDied","Data":"0dd19a0eaa6d35cccc61aae1cab273a967511b6ad16907005ffdc3ec7b0a3d9f"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.241575 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd19a0eaa6d35cccc61aae1cab273a967511b6ad16907005ffdc3ec7b0a3d9f" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.241548 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.246913 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.279892 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerStarted","Data":"eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.280383 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.282129 4979 generic.go:334] "Generic (PLEG): container finished" podID="81fec9c6-beaa-4731-b527-51284f88fb92" containerID="d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316" exitCode=0 Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.282237 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerDied","Data":"d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.295780 4979 generic.go:334] "Generic (PLEG): container finished" podID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerID="ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186" exitCode=0 Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.296380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerDied","Data":"ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.314704 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.540094 podStartE2EDuration="1m18.314680919s" podCreationTimestamp="2026-01-30 22:00:02 +0000 UTC" firstStartedPulling="2026-01-30 22:00:04.154835645 +0000 UTC m=+1200.116082678" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:20.306796917 +0000 UTC m=+1276.268043950" watchObservedRunningTime="2026-01-30 22:01:20.314680919 +0000 UTC m=+1276.275927952" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.645077 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.785775 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.790377 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"11b3f71c-0345-4261-8d0c-e7d700eb2932\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.790710 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"11b3f71c-0345-4261-8d0c-e7d700eb2932\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.792304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11b3f71c-0345-4261-8d0c-e7d700eb2932" (UID: "11b3f71c-0345-4261-8d0c-e7d700eb2932"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.794833 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4" (OuterVolumeSpecName: "kube-api-access-dcml4") pod "11b3f71c-0345-4261-8d0c-e7d700eb2932" (UID: "11b3f71c-0345-4261-8d0c-e7d700eb2932"). InnerVolumeSpecName "kube-api-access-dcml4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893218 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"83840d8c-fe62-449c-a3ab-5404215dce87\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893372 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"83840d8c-fe62-449c-a3ab-5404215dce87\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893795 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893812 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893881 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83840d8c-fe62-449c-a3ab-5404215dce87" (UID: "83840d8c-fe62-449c-a3ab-5404215dce87"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.897043 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl" (OuterVolumeSpecName: "kube-api-access-j5cxl") pod "83840d8c-fe62-449c-a3ab-5404215dce87" (UID: "83840d8c-fe62-449c-a3ab-5404215dce87"). InnerVolumeSpecName "kube-api-access-j5cxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.995443 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.995483 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.083053 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" path="/var/lib/kubelet/pods/6bd0719b-952d-4080-a685-ce90c1c3bf93/volumes" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.306985 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.306966 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerDied","Data":"4d97788c279e351ba877ef75288ac4a26b1ff285ae90d5d47ad933b5c4cdbcba"} Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.307079 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d97788c279e351ba877ef75288ac4a26b1ff285ae90d5d47ad933b5c4cdbcba" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.310018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gds8v" event={"ID":"83840d8c-fe62-449c-a3ab-5404215dce87","Type":"ContainerDied","Data":"4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f"} Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.310099 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.310316 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.667205 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.790718 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.795883 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.809261 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"81fec9c6-beaa-4731-b527-51284f88fb92\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.809418 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"81fec9c6-beaa-4731-b527-51284f88fb92\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.810454 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81fec9c6-beaa-4731-b527-51284f88fb92" (UID: "81fec9c6-beaa-4731-b527-51284f88fb92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.819665 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772" (OuterVolumeSpecName: "kube-api-access-p8772") pod "81fec9c6-beaa-4731-b527-51284f88fb92" (UID: "81fec9c6-beaa-4731-b527-51284f88fb92"). InnerVolumeSpecName "kube-api-access-p8772". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.911642 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"e0187b79-63c8-4f13-af19-892e8c9b36f9\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.911751 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"e0187b79-63c8-4f13-af19-892e8c9b36f9\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.911846 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912062 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912271 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0187b79-63c8-4f13-af19-892e8c9b36f9" (UID: "e0187b79-63c8-4f13-af19-892e8c9b36f9"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912579 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" (UID: "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912654 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912743 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912787 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912806 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912824 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: E0130 22:01:21.912759 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:01:21 crc kubenswrapper[4979]: E0130 22:01:21.912868 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:01:21 crc kubenswrapper[4979]: E0130 22:01:21.912947 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:01:53.912921713 +0000 UTC m=+1309.874168787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.915127 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs" (OuterVolumeSpecName: "kube-api-access-748xs") pod "e0187b79-63c8-4f13-af19-892e8c9b36f9" (UID: "e0187b79-63c8-4f13-af19-892e8c9b36f9"). InnerVolumeSpecName "kube-api-access-748xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.917248 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq" (OuterVolumeSpecName: "kube-api-access-vfnkq") pod "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" (UID: "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326"). InnerVolumeSpecName "kube-api-access-vfnkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.014956 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.015020 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.321431 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.321430 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerDied","Data":"fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c"} Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.321572 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.324167 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerDied","Data":"d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b"} Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.324229 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.324185 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.326554 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerDied","Data":"138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932"} Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.326593 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.326596 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:23 crc kubenswrapper[4979]: I0130 22:01:23.340831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerStarted","Data":"3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb"} Jan 30 22:01:23 crc kubenswrapper[4979]: I0130 22:01:23.369658 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qf69d" podStartSLOduration=2.495207574 podStartE2EDuration="33.369636691s" podCreationTimestamp="2026-01-30 22:00:50 +0000 UTC" firstStartedPulling="2026-01-30 22:00:51.286365144 +0000 UTC m=+1247.247612177" lastFinishedPulling="2026-01-30 22:01:22.160794261 +0000 UTC m=+1278.122041294" observedRunningTime="2026-01-30 22:01:23.366718103 +0000 UTC m=+1279.327965156" watchObservedRunningTime="2026-01-30 22:01:23.369636691 +0000 UTC m=+1279.330883754" Jan 30 22:01:23 crc kubenswrapper[4979]: I0130 22:01:23.603180 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.256068 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258161 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258237 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258356 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258409 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258481 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258533 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258599 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258657 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258730 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258788 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258863 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258923 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259151 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259218 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259280 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259371 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259434 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259495 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.260137 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.263923 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.305117 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.394559 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.394760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.496653 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.496822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.497801 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.516903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.632254 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:26 crc kubenswrapper[4979]: I0130 22:01:26.206199 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:01:26 crc kubenswrapper[4979]: W0130 22:01:26.211018 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod206c6cff_9f21_42be_b4d9_ebab3cb4ead8.slice/crio-97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569 WatchSource:0}: Error finding container 97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569: Status 404 returned error can't find the container with id 97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569 Jan 30 22:01:26 crc kubenswrapper[4979]: I0130 22:01:26.392076 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerStarted","Data":"97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569"} Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.308651 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.310517 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.312736 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tzvjb" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.313402 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.330809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.401549 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerStarted","Data":"aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805"} Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.428449 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kkrz5" podStartSLOduration=2.428411257 podStartE2EDuration="2.428411257s" podCreationTimestamp="2026-01-30 22:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:27.418543361 +0000 UTC m=+1283.379790394" watchObservedRunningTime="2026-01-30 22:01:27.428411257 +0000 UTC m=+1283.389658290" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.455893 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.455988 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.456050 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.456204 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558666 4979 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558763 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.570759 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.579140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.580698 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.593467 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.634767 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:28 crc kubenswrapper[4979]: I0130 22:01:28.425257 4979 generic.go:334] "Generic (PLEG): container finished" podID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerID="aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805" exitCode=0 Jan 30 22:01:28 crc kubenswrapper[4979]: I0130 22:01:28.425393 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerDied","Data":"aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805"} Jan 30 22:01:28 crc kubenswrapper[4979]: I0130 22:01:28.571785 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:01:28 crc kubenswrapper[4979]: W0130 22:01:28.925667 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod023efd8e_7f0d_4ac5_80b3_db30dbb25905.slice/crio-55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71 WatchSource:0}: Error finding container 55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71: Status 404 returned error can't find the container with id 55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71 Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.438577 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerStarted","Data":"e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6"} Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.439260 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.440769 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerStarted","Data":"55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71"} Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.478209 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.387463111 podStartE2EDuration="42.478190018s" podCreationTimestamp="2026-01-30 22:00:47 +0000 UTC" firstStartedPulling="2026-01-30 22:00:48.894724037 +0000 UTC m=+1244.855971060" lastFinishedPulling="2026-01-30 22:01:28.985450924 +0000 UTC m=+1284.946697967" observedRunningTime="2026-01-30 22:01:29.470714377 +0000 UTC m=+1285.431961410" watchObservedRunningTime="2026-01-30 22:01:29.478190018 +0000 UTC m=+1285.439437051" Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.969412 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.117309 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.117492 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.118099 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "206c6cff-9f21-42be-b4d9-ebab3cb4ead8" (UID: "206c6cff-9f21-42be-b4d9-ebab3cb4ead8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.136561 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh" (OuterVolumeSpecName: "kube-api-access-666vh") pod "206c6cff-9f21-42be-b4d9-ebab3cb4ead8" (UID: "206c6cff-9f21-42be-b4d9-ebab3cb4ead8"). InnerVolumeSpecName "kube-api-access-666vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.220786 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.220835 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.455128 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerDied","Data":"97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569"} Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.455160 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.455176 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569" Jan 30 22:01:31 crc kubenswrapper[4979]: I0130 22:01:31.467117 4979 generic.go:334] "Generic (PLEG): container finished" podID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerID="3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb" exitCode=0 Jan 30 22:01:31 crc kubenswrapper[4979]: I0130 22:01:31.467171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerDied","Data":"3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb"} Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.041629 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.041704 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.834277 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914593 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914661 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914682 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914708 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: 
\"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914846 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.917924 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.918380 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.924458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z" (OuterVolumeSpecName: "kube-api-access-g4d6z") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "kube-api-access-g4d6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.930907 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.948902 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts" (OuterVolumeSpecName: "scripts") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.958325 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.958457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016776 4979 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016826 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016843 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016855 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016870 4979 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016933 4979 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016967 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.492305 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerDied","Data":"1cded23ff5ee2d2e3497c55f604788871e1bcd1e4e1acb05a7084523b596fe7e"} Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.492380 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cded23ff5ee2d2e3497c55f604788871e1bcd1e4e1acb05a7084523b596fe7e" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.492421 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.674290 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.222473 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:01:34 crc kubenswrapper[4979]: E0130 22:01:34.222883 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerName="swift-ring-rebalance" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.222899 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerName="swift-ring-rebalance" Jan 30 22:01:34 crc kubenswrapper[4979]: E0130 22:01:34.222920 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerName="mariadb-account-create-update" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.222930 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerName="mariadb-account-create-update" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.223109 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerName="mariadb-account-create-update" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.223132 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerName="swift-ring-rebalance" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.223720 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.237379 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.347270 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.347337 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.439626 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.441316 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.444796 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.448621 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.448853 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.450003 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.451405 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.481327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.526438 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.527690 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.547362 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.550808 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.551086 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.606834 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.623539 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.624836 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.629150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.647083 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655312 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655478 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655523 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.658506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.680624 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.753330 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.754753 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757221 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757282 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757320 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757485 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.758305 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.758757 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.758888 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.759296 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.759597 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.760875 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.770045 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.771275 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.775295 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.785245 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.821581 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.822103 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.850413 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858641 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858689 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858719 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858916 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.859062 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.859101 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"neutron-d511-account-create-update-jtbft\" (UID: 
\"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.859207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.860683 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.904466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.904536 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.906929 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.940547 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.962801 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.962906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963234 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 
Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963339 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc"
Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963384 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p"
Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963550 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc"
Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.964232 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.000506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.000828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.012828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.026288 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.058682 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.065587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.065961 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.070031 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.096699 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.124694 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tj4gc"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.164940 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.235806 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-svtcv"
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.332646 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mvqgx"]
Jan 30 22:01:35 crc kubenswrapper[4979]: W0130 22:01:35.380584 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2df91e7_6710_4ee4_a671_4b19dc5c2798.slice/crio-50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659 WatchSource:0}: Error finding container 50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659: Status 404 returned error can't find the container with id 50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.601258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mvqgx" event={"ID":"a2df91e7-6710-4ee4-a671-4b19dc5c2798","Type":"ContainerStarted","Data":"50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659"}
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.642899 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-95kjb"]
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.751766 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"]
Jan 30 22:01:35 crc kubenswrapper[4979]: W0130 22:01:35.760268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79a4cbbe_93e4_414e_9ca3_2cd182d6ed96.slice/crio-6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be WatchSource:0}: Error finding container 6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be: Status 404 returned error can't find the container with id 6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.852366 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"]
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.922187 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"]
Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.965840 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tj4gc"]
Jan 30 22:01:36 crc kubenswrapper[4979]: W0130 22:01:36.148686 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dd39b08_adf2_44da_b301_8e8694590426.slice/crio-cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27 WatchSource:0}: Error finding container cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27: Status 404 returned error can't find the container with id cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27
Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.149864 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-svtcv"]
Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.616611 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-95kjb" event={"ID":"175f02fa-3089-4350-a658-c939f6e6ef9f","Type":"ContainerDied","Data":"e944b74595e093897d5163f1d6f5e2841d79cfe7a27b236506370f93704312ba"}
"Generic (PLEG): container finished" podID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerID="e944b74595e093897d5163f1d6f5e2841d79cfe7a27b236506370f93704312ba" exitCode=0 Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.617380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-95kjb" event={"ID":"175f02fa-3089-4350-a658-c939f6e6ef9f","Type":"ContainerStarted","Data":"e28eadd933e61c8cf81e7798598f35cc0b2d5d5bba932062fca900134c507514"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.625523 4979 generic.go:334] "Generic (PLEG): container finished" podID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerID="79ca49dab9783f66a2ceb714d9fa0a2f61e36e1771efaec7c095de2ed5249a25" exitCode=0 Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.625597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mvqgx" event={"ID":"a2df91e7-6710-4ee4-a671-4b19dc5c2798","Type":"ContainerDied","Data":"79ca49dab9783f66a2ceb714d9fa0a2f61e36e1771efaec7c095de2ed5249a25"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.638766 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerStarted","Data":"60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.638848 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerStarted","Data":"6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.654831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerStarted","Data":"046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.654893 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerStarted","Data":"ea00611de73705c35473a924d4f7c549482419a2f78ecfeaf84b3d1d727771aa"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.657728 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerStarted","Data":"e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.657784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerStarted","Data":"cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.667522 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerStarted","Data":"bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.667586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" 
event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerStarted","Data":"503e7d3ba39535a71f67070c35c2d482f374ad3f2b694d7668c84e006b975dc6"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.669466 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerStarted","Data":"b66b3c202ff49e3b9a37dcd38590680dac6fdd11f7dbfdf69a3e9361cda17e7e"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.698954 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-svtcv" podStartSLOduration=2.698921516 podStartE2EDuration="2.698921516s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.681312981 +0000 UTC m=+1292.642560014" watchObservedRunningTime="2026-01-30 22:01:36.698921516 +0000 UTC m=+1292.660168549" Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.703402 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-18a2-account-create-update-xznvc" podStartSLOduration=2.703380525 podStartE2EDuration="2.703380525s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.701164775 +0000 UTC m=+1292.662411808" watchObservedRunningTime="2026-01-30 22:01:36.703380525 +0000 UTC m=+1292.664627558" Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.732554 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d511-account-create-update-jtbft" podStartSLOduration=2.73252403 podStartE2EDuration="2.73252403s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.719487248 +0000 UTC m=+1292.680734291" watchObservedRunningTime="2026-01-30 22:01:36.73252403 +0000 UTC m=+1292.693771063" Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.808383 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5880-account-create-update-nvk6p" podStartSLOduration=2.808357629 podStartE2EDuration="2.808357629s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.753324268 +0000 UTC m=+1292.714571301" watchObservedRunningTime="2026-01-30 22:01:36.808357629 +0000 UTC m=+1292.769604662" Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.689929 4979 generic.go:334] "Generic (PLEG): container finished" podID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerID="60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.690066 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerDied","Data":"60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58"} Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.693764 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" 
containerID="046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.693813 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerDied","Data":"046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702"} Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.696682 4979 generic.go:334] "Generic (PLEG): container finished" podID="6dd39b08-adf2-44da-b301-8e8694590426" containerID="e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.696724 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerDied","Data":"e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8"} Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.700156 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerID="bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.700270 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerDied","Data":"bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13"} Jan 30 22:01:48 crc kubenswrapper[4979]: I0130 22:01:48.273768 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.668412 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.669050 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jxks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9zrqq_openstack(023efd8e-7f0d-4ac5-80b3-db30dbb25905): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.670757 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9zrqq" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.734716 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.738835 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.747535 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.795495 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.797206 4979 util.go:48] "No ready sandbox for pod can be found. 
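
The "Unhandled Error" record above dumps the failing container spec as a single &Container{...} literal, which is hard to scan. The sketch below restates just the fields that matter for the failure as a small, self-contained Go program; the types are simplified local stand-ins (not the real Kubernetes API types), and every value is copied from the record itself.

    package main

    import "fmt"

    // Local stand-ins for the container-spec fields visible in the
    // ErrImagePull record; deliberately much smaller than the real API.
    type EnvVar struct{ Name, Value string }
    type VolumeMount struct {
        Name, MountPath, SubPath string
        ReadOnly                 bool
    }
    type Container struct {
        Name, Image           string
        Command, Args         []string
        Env                   []EnvVar
        Mounts                []VolumeMount
        RunAsUser, RunAsGroup int64
    }

    func main() {
        c := Container{
            Name:    "glance-db-sync",
            Image:   "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified",
            Command: []string{"/bin/bash"},
            Args:    []string{"-c", "/usr/local/bin/kolla_start"},
            Env: []EnvVar{
                {"KOLLA_BOOTSTRAP", "true"},
                {"KOLLA_CONFIG_STRATEGY", "COPY_ALWAYS"},
            },
            Mounts: []VolumeMount{
                {"db-sync-config-data", "/etc/glance/glance.conf.d", "", true},
                {"config-data", "/etc/my.cnf", "my.cnf", true},
                {"config-data", "/var/lib/kolla/config_files/config.json", "db-sync-config.json", true},
                {"combined-ca-bundle", "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", "tls-ca-bundle.pem", true},
            },
            RunAsUser: 42415, RunAsGroup: 42415,
        }
        fmt.Printf("%+v\n", c)
    }

Note that the pull itself failed with "context canceled", i.e. the image copy was interrupted, not rejected; the spec is fine and the same pod starts successfully further down in the log.
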
Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.808183 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.808399 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.809512 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc3a0116-2f4a-4dde-bf99-56759f4349bc" (UID: "bc3a0116-2f4a-4dde-bf99-56759f4349bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.809968 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"175f02fa-3089-4350-a658-c939f6e6ef9f\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"175f02fa-3089-4350-a658-c939f6e6ef9f\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810043 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810075 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810620 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.811479 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "175f02fa-3089-4350-a658-c939f6e6ef9f" (UID: "175f02fa-3089-4350-a658-c939f6e6ef9f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.811592 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8b67e98-62a7-4a61-835e-8b7ec20167f3" (UID: "f8b67e98-62a7-4a61-835e-8b7ec20167f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.817496 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.843198 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr" (OuterVolumeSpecName: "kube-api-access-jbrtr") pod "bc3a0116-2f4a-4dde-bf99-56759f4349bc" (UID: "bc3a0116-2f4a-4dde-bf99-56759f4349bc"). InnerVolumeSpecName "kube-api-access-jbrtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.844291 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm" (OuterVolumeSpecName: "kube-api-access-4v5bm") pod "f8b67e98-62a7-4a61-835e-8b7ec20167f3" (UID: "f8b67e98-62a7-4a61-835e-8b7ec20167f3"). InnerVolumeSpecName "kube-api-access-4v5bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.847383 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5" (OuterVolumeSpecName: "kube-api-access-dgmk5") pod "175f02fa-3089-4350-a658-c939f6e6ef9f" (UID: "175f02fa-3089-4350-a658-c939f6e6ef9f"). InnerVolumeSpecName "kube-api-access-dgmk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.856575 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerDied","Data":"cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.856623 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.856732 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.863094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerDied","Data":"503e7d3ba39535a71f67070c35c2d482f374ad3f2b694d7668c84e006b975dc6"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.863394 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503e7d3ba39535a71f67070c35c2d482f374ad3f2b694d7668c84e006b975dc6" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.863528 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.873547 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.873698 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-95kjb" event={"ID":"175f02fa-3089-4350-a658-c939f6e6ef9f","Type":"ContainerDied","Data":"e28eadd933e61c8cf81e7798598f35cc0b2d5d5bba932062fca900134c507514"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.874289 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28eadd933e61c8cf81e7798598f35cc0b2d5d5bba932062fca900134c507514" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.878313 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mvqgx" event={"ID":"a2df91e7-6710-4ee4-a671-4b19dc5c2798","Type":"ContainerDied","Data":"50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.878351 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.878444 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.881671 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerDied","Data":"6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.881746 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.881701 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.885195 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.886342 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerDied","Data":"ea00611de73705c35473a924d4f7c549482419a2f78ecfeaf84b3d1d727771aa"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.886449 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea00611de73705c35473a924d4f7c549482419a2f78ecfeaf84b3d1d727771aa" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.886459 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-9zrqq" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913230 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913312 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"6dd39b08-adf2-44da-b301-8e8694590426\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913404 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913482 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913524 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913587 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"6dd39b08-adf2-44da-b301-8e8694590426\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2df91e7-6710-4ee4-a671-4b19dc5c2798" (UID: "a2df91e7-6710-4ee4-a671-4b19dc5c2798"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914014 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914051 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914061 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914076 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914086 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914096 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914793 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dd39b08-adf2-44da-b301-8e8694590426" (UID: "6dd39b08-adf2-44da-b301-8e8694590426"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.915321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" (UID: "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.933341 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4" (OuterVolumeSpecName: "kube-api-access-hlkh4") pod "a2df91e7-6710-4ee4-a671-4b19dc5c2798" (UID: "a2df91e7-6710-4ee4-a671-4b19dc5c2798"). InnerVolumeSpecName "kube-api-access-hlkh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.942906 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb" (OuterVolumeSpecName: "kube-api-access-nx2nb") pod "6dd39b08-adf2-44da-b301-8e8694590426" (UID: "6dd39b08-adf2-44da-b301-8e8694590426"). InnerVolumeSpecName "kube-api-access-nx2nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.943459 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq" (OuterVolumeSpecName: "kube-api-access-7f6xq") pod "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" (UID: "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96"). InnerVolumeSpecName "kube-api-access-7f6xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016172 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016236 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016263 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016283 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016302 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.245087 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.245990 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.250172 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-tj4gc" podUID="fac7007d-8147-477c-a42e-2463290030ff"
Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.921522 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-tj4gc" podUID="fac7007d-8147-477c-a42e-2463290030ff"
Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.001880 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.010769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.201167 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.841365 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.949439 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"b5f19eb16c0b9ad8d89d2db8aaef61e8a41afec6d53e30023f1498d447572ee3"}
Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.969607 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2"}
Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.970414 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff"}
Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.970431 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316"}
Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.970444 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d"}
Jan 30 22:02:02 crc kubenswrapper[4979]: I0130 22:02:02.039408 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:02:02 crc kubenswrapper[4979]: I0130 22:02:02.040110 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:02:04 crc kubenswrapper[4979]: I0130 22:02:04.055207 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601"}
Jan 30 22:02:04 crc kubenswrapper[4979]: I0130 22:02:04.057179 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3"}
Jan 30 22:02:04 crc kubenswrapper[4979]: I0130 22:02:04.057218 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759"}
Jan 30 22:02:05 crc kubenswrapper[4979]: I0130 22:02:05.082854 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551"}
event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551"} Jan 30 22:02:05 crc kubenswrapper[4979]: I0130 22:02:05.083293 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerStarted","Data":"2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92"} Jan 30 22:02:05 crc kubenswrapper[4979]: I0130 22:02:05.111190 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9zrqq" podStartSLOduration=3.5459591169999998 podStartE2EDuration="38.111166151s" podCreationTimestamp="2026-01-30 22:01:27 +0000 UTC" firstStartedPulling="2026-01-30 22:01:28.982162165 +0000 UTC m=+1284.943409198" lastFinishedPulling="2026-01-30 22:02:03.547369199 +0000 UTC m=+1319.508616232" observedRunningTime="2026-01-30 22:02:05.102070699 +0000 UTC m=+1321.063317732" watchObservedRunningTime="2026-01-30 22:02:05.111166151 +0000 UTC m=+1321.072413184" Jan 30 22:02:07 crc kubenswrapper[4979]: I0130 22:02:07.112413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56"} Jan 30 22:02:07 crc kubenswrapper[4979]: I0130 22:02:07.113276 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.125177 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerStarted","Data":"8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135600 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135631 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.153785 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tj4gc" podStartSLOduration=2.59324292 podStartE2EDuration="34.153758646s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="2026-01-30 22:01:35.985966176 
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.162556 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3"}
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.209413 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=69.546146983 podStartE2EDuration="1m21.209391829s" podCreationTimestamp="2026-01-30 22:00:48 +0000 UTC" firstStartedPulling="2026-01-30 22:01:54.852498804 +0000 UTC m=+1310.813745847" lastFinishedPulling="2026-01-30 22:02:06.51574366 +0000 UTC m=+1322.476990693" observedRunningTime="2026-01-30 22:02:09.201960742 +0000 UTC m=+1325.163207795" watchObservedRunningTime="2026-01-30 22:02:09.209391829 +0000 UTC m=+1325.170638862"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525423 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"]
Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525849 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525873 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525888 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525894 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525907 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525914 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525926 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525931 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525955 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525963 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525984 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525992 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526163 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526177 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526188 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526203 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526212 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update"
Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526223 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create"
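
The cpu_manager/memory_manager "RemoveStaleState" records above fire when a new pod (here dnsmasq-dns-764c5664d7-ncb4v) is admitted: assignments recorded for containers of pods that no longer exist, such as the completed db-create jobs, are purged from the managers' in-memory state. A compact Go sketch of that bookkeeping, with simplified types and UIDs taken from the records:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // Drop every recorded assignment whose pod is no longer active.
    // (Deleting from a map while ranging over it is safe in Go.)
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        state := map[key]string{
            {"f8b67e98-62a7-4a61-835e-8b7ec20167f3", "mariadb-account-create-update"}: "0-3",
            {"6dd39b08-adf2-44da-b301-8e8694590426", "mariadb-database-create"}:       "0-3",
        }
        active := map[string]bool{"9f00645b-b1f2-447f-b5a0-b38147768d8f": true}
        removeStaleState(state, active)
    }
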
"RemoveStaleState: removing container" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525992 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526163 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526177 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526188 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526203 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526212 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526223 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.527330 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.530076 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.544643 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587641 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587876 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmst\" (UniqueName: 
\"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.590601 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.590873 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.692663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693528 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693568 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693604 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693689 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.694123 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.694780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.694929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.695011 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.695110 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.716980 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.891466 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:10 crc kubenswrapper[4979]: I0130 22:02:10.363525 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:10 crc kubenswrapper[4979]: W0130 22:02:10.368219 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f00645b_b1f2_447f_b5a0_b38147768d8f.slice/crio-b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce WatchSource:0}: Error finding container b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce: Status 404 returned error can't find the container with id b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce Jan 30 22:02:11 crc kubenswrapper[4979]: I0130 22:02:11.182318 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerID="50c11c6ba1f573a9bebf130bbdbf73d94684ce5c18c6c0476848fec8b87a100e" exitCode=0 Jan 30 22:02:11 crc kubenswrapper[4979]: I0130 22:02:11.182404 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerDied","Data":"50c11c6ba1f573a9bebf130bbdbf73d94684ce5c18c6c0476848fec8b87a100e"} Jan 30 22:02:11 crc kubenswrapper[4979]: I0130 22:02:11.182720 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerStarted","Data":"b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce"} Jan 30 22:02:12 crc kubenswrapper[4979]: I0130 22:02:12.194734 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerStarted","Data":"c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9"} Jan 30 22:02:12 crc kubenswrapper[4979]: I0130 22:02:12.195590 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:12 crc kubenswrapper[4979]: I0130 22:02:12.228957 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" podStartSLOduration=3.228924631 podStartE2EDuration="3.228924631s" podCreationTimestamp="2026-01-30 22:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:12.215610166 +0000 UTC m=+1328.176857219" watchObservedRunningTime="2026-01-30 22:02:12.228924631 +0000 UTC m=+1328.190171684" Jan 30 22:02:13 crc kubenswrapper[4979]: I0130 22:02:13.209691 4979 generic.go:334] "Generic (PLEG): container finished" podID="fac7007d-8147-477c-a42e-2463290030ff" containerID="8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73" exitCode=0 Jan 30 22:02:13 crc kubenswrapper[4979]: I0130 22:02:13.211123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerDied","Data":"8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73"} Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.543656 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.593505 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"fac7007d-8147-477c-a42e-2463290030ff\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.593785 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"fac7007d-8147-477c-a42e-2463290030ff\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.594726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"fac7007d-8147-477c-a42e-2463290030ff\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.600645 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm" (OuterVolumeSpecName: "kube-api-access-xgdvm") pod "fac7007d-8147-477c-a42e-2463290030ff" (UID: "fac7007d-8147-477c-a42e-2463290030ff"). InnerVolumeSpecName "kube-api-access-xgdvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.625218 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fac7007d-8147-477c-a42e-2463290030ff" (UID: "fac7007d-8147-477c-a42e-2463290030ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.648893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data" (OuterVolumeSpecName: "config-data") pod "fac7007d-8147-477c-a42e-2463290030ff" (UID: "fac7007d-8147-477c-a42e-2463290030ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.696854 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.697195 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.697208 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.232898 4979 generic.go:334] "Generic (PLEG): container finished" podID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerID="2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92" exitCode=0 Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.233007 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerDied","Data":"2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92"} Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.235514 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerDied","Data":"b66b3c202ff49e3b9a37dcd38590680dac6fdd11f7dbfdf69a3e9361cda17e7e"} Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.235553 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b66b3c202ff49e3b9a37dcd38590680dac6fdd11f7dbfdf69a3e9361cda17e7e" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.235628 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.540530 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.540801 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" containerID="cri-o://c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9" gracePeriod=10 Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.562845 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:15 crc kubenswrapper[4979]: E0130 22:02:15.563363 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac7007d-8147-477c-a42e-2463290030ff" containerName="keystone-db-sync" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.563382 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac7007d-8147-477c-a42e-2463290030ff" containerName="keystone-db-sync" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.563600 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac7007d-8147-477c-a42e-2463290030ff" containerName="keystone-db-sync" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.564353 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.574739 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.575441 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.575868 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.577794 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.584724 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.584908 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.609372 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613583 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613677 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613782 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613824 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613897 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " 
pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.630784 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.631246 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716282 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716353 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716462 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716493 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 
crc kubenswrapper[4979]: I0130 22:02:15.716583 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716609 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716645 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716672 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.733711 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.745902 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.754400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.757576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.764105 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.764639 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.766041 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.776704 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.783521 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5h7pb" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.783914 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.784157 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.793990 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818717 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818789 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818810 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818964 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc 
kubenswrapper[4979]: I0130 22:02:15.818999 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819064 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819117 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819148 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.820603 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821309 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821470 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821677 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.889742 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.894782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920865 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920910 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920943 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920974 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.921059 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 
22:02:15.925129 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.930780 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.932627 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.951811 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.952715 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cgj89" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.953059 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.953303 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.974993 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.994437 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.000661 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.017976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.018934 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.031267 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.031340 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.031410 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.032282 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.124685 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.137685 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.137800 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.137938 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.158081 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.173696 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.255796 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.349799 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.353331 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.363056 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.374715 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nknfn" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.392557 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.396372 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.401492 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerID="c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9" exitCode=0 Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.402835 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerDied","Data":"c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9"} Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.405296 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.446218 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459292 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459698 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459780 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.466594 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.477521 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.506451 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.508658 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.521660 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cxc2m" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.522163 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.530893 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.538173 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.559602 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562058 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562170 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562276 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562327 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562386 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562456 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562479 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562554 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562604 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.563336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.566427 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.571707 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.572131 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.585727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.592330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.592439 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.592951 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.600969 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665633 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665670 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665691 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665709 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665752 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665835 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665885 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665909 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665977 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.666001 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.670379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.671906 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.672828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.673460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") 
" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.685633 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.693527 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.693607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.710310 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.724247 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.727869 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768607 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768753 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768780 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768882 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 
22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.769071 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.769119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.769157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.775507 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.779333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.782461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.783625 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.791266 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.792404 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.796421 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.905884 
4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.918174 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.957514 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.105878 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300490 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300720 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300768 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300892 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300940 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.314746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst" (OuterVolumeSpecName: "kube-api-access-2xmst") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "kube-api-access-2xmst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.345122 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.398130 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.399948 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config" (OuterVolumeSpecName: "config") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.406289 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.406455 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.406512 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.412115 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.415666 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.426084 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerDied","Data":"b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce"} Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.426157 4979 scope.go:117] "RemoveContainer" containerID="c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.426330 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.442680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.446092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerDied","Data":"55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71"} Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.446130 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.446191 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.472548 4979 scope.go:117] "RemoveContainer" containerID="50c11c6ba1f573a9bebf130bbdbf73d94684ce5c18c6c0476848fec8b87a100e" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.506112 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.507998 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508055 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508124 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508181 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508568 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508583 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc 
kubenswrapper[4979]: I0130 22:02:17.508595 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.519964 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks" (OuterVolumeSpecName: "kube-api-access-4jxks") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "kube-api-access-4jxks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.526077 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.544518 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.554691 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.562116 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.570019 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80aa258c_fc1b_4379_8b50_ac89cb9b4568.slice/crio-aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3 WatchSource:0}: Error finding container aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3: Status 404 returned error can't find the container with id aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3 Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.586231 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.616258 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8481722d_b63c_4f8e_82e2_0960d719b46b.slice/crio-a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213 WatchSource:0}: Error finding container a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213: Status 404 returned error can't find the container with id a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213 Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.620210 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.620252 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.620270 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.703602 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.708912 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data" (OuterVolumeSpecName: "config-data") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.742269 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.754510 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.768568 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.773268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6043875b_c6a4_4cbd_919e_79a61239eaa6.slice/crio-6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd WatchSource:0}: Error finding container 6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd: Status 404 returned error can't find the container with id 6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.849436 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.871268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79723cfd_4e3c_446c_bdf1_5c2c997950a8.slice/crio-ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e WatchSource:0}: Error finding container ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e: Status 404 returned error can't find the container with id ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.983906 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.991609 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.260249 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.467363 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerStarted","Data":"f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.467451 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerStarted","Data":"9c6eba33d3f0c4b1f4edf70e3d95c55f24ea5e1f25cb0716ba0a75705be5252d"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.477527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerStarted","Data":"386d53c83a51fa8ebf1662105890a6cd9dd37690f36cb6bac7142c9df6dc4505"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.482745 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.501105 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xq8ms" podStartSLOduration=3.50108342 podStartE2EDuration="3.50108342s" podCreationTimestamp="2026-01-30 22:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:18.497376692 +0000 UTC m=+1334.458623745" watchObservedRunningTime="2026-01-30 22:02:18.50108342 +0000 UTC m=+1334.462330453" Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.502429 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerStarted","Data":"ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.510390 4979 generic.go:334] "Generic (PLEG): container finished" podID="fee781fe-922e-4053-a318-02f409afb0a4" containerID="66bd742e325dd7cacecdec1b82cf32a7698ec617add172b382f4d11ff21b5756" exitCode=0 Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.510634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerDied","Data":"66bd742e325dd7cacecdec1b82cf32a7698ec617add172b382f4d11ff21b5756"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.510675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerStarted","Data":"8bf4c071c0668d71e79b98c441bfd48214eb848e83591dafb62efd7aedf4343c"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.526777 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerStarted","Data":"d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.526905 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerStarted","Data":"a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.534373 4979 generic.go:334] "Generic (PLEG): container finished" podID="134a82db-d55c-4764-86d1-62146b42583f" containerID="7db1b8115ca37505061c796c7d7fb618ffe09453de2ac94daa33d9b28697993f" exitCode=0 Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.534646 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" event={"ID":"134a82db-d55c-4764-86d1-62146b42583f","Type":"ContainerDied","Data":"7db1b8115ca37505061c796c7d7fb618ffe09453de2ac94daa33d9b28697993f"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.534716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" event={"ID":"134a82db-d55c-4764-86d1-62146b42583f","Type":"ContainerStarted","Data":"6faa7501d297ac13cb24bec157535442206f4913e8b55307e20761f154eb1a60"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.541105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerStarted","Data":"aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.627004 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qjfmb" podStartSLOduration=3.626980517 podStartE2EDuration="3.626980517s" podCreationTimestamp="2026-01-30 22:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:18.606477962 +0000 UTC m=+1334.567724995" watchObservedRunningTime="2026-01-30 22:02:18.626980517 +0000 UTC m=+1334.588227540" Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.966010 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.106464 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" path="/var/lib/kubelet/pods/9f00645b-b1f2-447f-b5a0-b38147768d8f/volumes" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107330 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:02:19 crc kubenswrapper[4979]: E0130 22:02:19.107783 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerName="glance-db-sync" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107808 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerName="glance-db-sync" Jan 30 22:02:19 crc kubenswrapper[4979]: E0130 22:02:19.107848 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107857 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" Jan 30 22:02:19 crc kubenswrapper[4979]: E0130 22:02:19.107891 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="init" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107900 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="init" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.112297 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.112391 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerName="glance-db-sync" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.114066 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.125835 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286342 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286838 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286980 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.287005 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.287051 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.340886 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.389890 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.389986 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390060 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.392415 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.393366 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.394750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc 
kubenswrapper[4979]: I0130 22:02:19.399379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.400250 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.428518 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.493852 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.493952 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494146 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494297 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494555 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494634 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.519674 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g" (OuterVolumeSpecName: "kube-api-access-lxp7g") pod 
"134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "kube-api-access-lxp7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.538412 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.539069 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.542853 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config" (OuterVolumeSpecName: "config") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.542912 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.567549 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.577019 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598696 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598785 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598845 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598862 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598876 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598901 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.607447 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerStarted","Data":"a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe"} Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.607624 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" containerID="cri-o://a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe" gracePeriod=10 Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.607903 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.618781 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" event={"ID":"134a82db-d55c-4764-86d1-62146b42583f","Type":"ContainerDied","Data":"6faa7501d297ac13cb24bec157535442206f4913e8b55307e20761f154eb1a60"} Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.618858 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.618892 4979 scope.go:117] "RemoveContainer" containerID="7db1b8115ca37505061c796c7d7fb618ffe09453de2ac94daa33d9b28697993f" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.657482 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" podStartSLOduration=3.657448741 podStartE2EDuration="3.657448741s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:19.643897811 +0000 UTC m=+1335.605144854" watchObservedRunningTime="2026-01-30 22:02:19.657448741 +0000 UTC m=+1335.618695774" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.927247 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.953222 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.083441 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: E0130 22:02:20.084121 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134a82db-d55c-4764-86d1-62146b42583f" containerName="init" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.084147 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="134a82db-d55c-4764-86d1-62146b42583f" containerName="init" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.084378 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="134a82db-d55c-4764-86d1-62146b42583f" containerName="init" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.085516 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.090392 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.093730 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.100147 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tzvjb" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.107382 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220231 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220347 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220425 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220515 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220783 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220909 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " 
pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.271345 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.309850 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.311522 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.315293 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322830 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322909 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322929 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322956 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323066 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323581 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323782 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.329126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.334056 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.336614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.343251 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.357473 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.362771 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.396992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.424452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425363 4979 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425409 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425468 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425499 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425562 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425240 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527366 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527517 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527550 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527644 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527669 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.528530 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.529424 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.529777 4979 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.538616 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.544606 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.550828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.561073 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.561995 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.646486 4979 generic.go:334] "Generic (PLEG): container finished" podID="fee781fe-922e-4053-a318-02f409afb0a4" containerID="a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe" exitCode=0 Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.646606 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerDied","Data":"a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe"} Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.650243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerStarted","Data":"0bbffd435fbf3836f4de2a4551e90534d72d8f16d6de3150a0817077872230f4"} Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.758186 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.941777 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.045797 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.045947 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.045976 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.046071 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.046100 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.046184 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.060358 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944" (OuterVolumeSpecName: "kube-api-access-bl944") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "kube-api-access-bl944". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.095899 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134a82db-d55c-4764-86d1-62146b42583f" path="/var/lib/kubelet/pods/134a82db-d55c-4764-86d1-62146b42583f/volumes" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.142833 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.148726 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.148763 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.165982 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.176097 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.188617 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.208468 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config" (OuterVolumeSpecName: "config") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255160 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255224 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255245 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255255 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.264696 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.602266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:21 crc kubenswrapper[4979]: W0130 22:02:21.609795 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57847e36_4024_4fcd_a141_ac9bac71a969.slice/crio-5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a WatchSource:0}: Error finding container 5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a: Status 404 returned error can't find the container with id 5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.672330 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerStarted","Data":"5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.676996 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerDied","Data":"8bf4c071c0668d71e79b98c441bfd48214eb848e83591dafb62efd7aedf4343c"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.677086 4979 scope.go:117] "RemoveContainer" containerID="a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.677245 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.699606 4979 generic.go:334] "Generic (PLEG): container finished" podID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerID="a84e16cda693df587eff75844a45206ef87069920f6876c4a2c9eb4f7fae9fbe" exitCode=0 Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.700177 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerDied","Data":"a84e16cda693df587eff75844a45206ef87069920f6876c4a2c9eb4f7fae9fbe"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.718671 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerStarted","Data":"0235a45d441f3c8cb176ee3ba5d7b3a592e383ad0a2a9bc2db0b906132b62220"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.772219 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.794115 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.807975 4979 scope.go:117] "RemoveContainer" containerID="66bd742e325dd7cacecdec1b82cf32a7698ec617add172b382f4d11ff21b5756" Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.760506 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerStarted","Data":"1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68"} Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.760996 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.779364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerStarted","Data":"5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31"} Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.797649 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" podStartSLOduration=3.79762184 podStartE2EDuration="3.79762184s" podCreationTimestamp="2026-01-30 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:22.788984011 +0000 UTC m=+1338.750231044" watchObservedRunningTime="2026-01-30 22:02:22.79762184 +0000 UTC m=+1338.758868873" Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.088494 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee781fe-922e-4053-a318-02f409afb0a4" path="/var/lib/kubelet/pods/fee781fe-922e-4053-a318-02f409afb0a4/volumes" Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.820144 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerStarted","Data":"d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675"} Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.825389 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerStarted","Data":"11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305"} Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.858821 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.858800051 podStartE2EDuration="4.858800051s" podCreationTimestamp="2026-01-30 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:23.842542508 +0000 UTC m=+1339.803789551" watchObservedRunningTime="2026-01-30 22:02:23.858800051 +0000 UTC m=+1339.820047074" Jan 30 22:02:24 crc kubenswrapper[4979]: I0130 22:02:24.838680 4979 generic.go:334] "Generic (PLEG): container finished" podID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerID="f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9" exitCode=0 Jan 30 22:02:24 crc kubenswrapper[4979]: I0130 22:02:24.838789 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerDied","Data":"f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9"} Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.136537 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.144465 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" containerID="cri-o://5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.144647 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" containerID="cri-o://d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.265556 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881038 4979 generic.go:334] "Generic (PLEG): container finished" podID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerID="d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675" exitCode=0 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881423 4979 generic.go:334] "Generic (PLEG): container finished" podID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerID="5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31" exitCode=143 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881084 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerDied","Data":"d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675"} Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881532 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerDied","Data":"5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31"} Jan 30 22:02:26 crc 
kubenswrapper[4979]: I0130 22:02:26.884836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerStarted","Data":"68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747"} Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.885075 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" containerID="cri-o://68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.885052 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" containerID="cri-o://11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.923589 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.923565115 podStartE2EDuration="7.923565115s" podCreationTimestamp="2026-01-30 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:26.91360874 +0000 UTC m=+1342.874855783" watchObservedRunningTime="2026-01-30 22:02:26.923565115 +0000 UTC m=+1342.884812148" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.052681 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.120736 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.121583 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.121695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.121839 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.122010 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc 
kubenswrapper[4979]: I0130 22:02:27.122141 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.140453 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: E0130 22:02:27.150669 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57847e36_4024_4fcd_a141_ac9bac71a969.slice/crio-68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.156248 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts" (OuterVolumeSpecName: "scripts") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.156426 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz" (OuterVolumeSpecName: "kube-api-access-h4jcz") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "kube-api-access-h4jcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.158264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.171051 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data" (OuterVolumeSpecName: "config-data") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.204007 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226390 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226451 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226463 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226505 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226516 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226526 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.900667 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerDied","Data":"9c6eba33d3f0c4b1f4edf70e3d95c55f24ea5e1f25cb0716ba0a75705be5252d"} Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.900724 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6eba33d3f0c4b1f4edf70e3d95c55f24ea5e1f25cb0716ba0a75705be5252d" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.900728 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904078 4979 generic.go:334] "Generic (PLEG): container finished" podID="57847e36-4024-4fcd-a141-ac9bac71a969" containerID="68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747" exitCode=0 Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904122 4979 generic.go:334] "Generic (PLEG): container finished" podID="57847e36-4024-4fcd-a141-ac9bac71a969" containerID="11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305" exitCode=143 Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904162 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerDied","Data":"68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747"} Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerDied","Data":"11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305"} Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.262019 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.275634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343304 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:02:28 crc kubenswrapper[4979]: E0130 22:02:28.343912 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343928 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" Jan 30 22:02:28 crc kubenswrapper[4979]: E0130 22:02:28.343952 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerName="keystone-bootstrap" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343960 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerName="keystone-bootstrap" Jan 30 22:02:28 crc kubenswrapper[4979]: E0130 22:02:28.343980 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="init" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343988 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="init" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.344242 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerName="keystone-bootstrap" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.344267 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.345064 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.348016 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.348264 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.348998 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.349196 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.349363 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.415178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.503919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504395 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504558 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504726 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504761 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606317 4979 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606562 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606632 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.612849 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.613321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.613515 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.615436 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") 
" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.626066 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.629110 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.718085 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.096129 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" path="/var/lib/kubelet/pods/a6e395ca-523e-41fa-99e7-54a7926bae7b/volumes" Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.573610 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.654782 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.655675 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" containerID="cri-o://11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b" gracePeriod=10 Jan 30 22:02:31 crc kubenswrapper[4979]: I0130 22:02:31.952346 4979 generic.go:334] "Generic (PLEG): container finished" podID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerID="11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b" exitCode=0 Jan 30 22:02:31 crc kubenswrapper[4979]: I0130 22:02:31.952562 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerDied","Data":"11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b"} Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.039873 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.039963 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.040122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.041052 4979 
Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.041052 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.041121 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c" gracePeriod=600
Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.966983 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c" exitCode=0
Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.967110 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c"}
Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.967156 4979 scope.go:117] "RemoveContainer" containerID="d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"
Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.507132 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
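
The sequence that follows is the standard pull-failure path: PullImage fails in the CRI call, the sync loop records ErrImagePull for the container, and subsequent syncs report ImagePullBackOff while the kubelet backs off between retries (roughly a doubling delay starting near 10 s and capped around 300 s; treat those constants as an assumption about defaults rather than a contract). A sketch of that shape:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 300*time.Second
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d: ImagePullBackOff, retry in %s\n", attempt, delay)
            delay *= 2 // double the wait after each failed pull
            if delay > maxDelay {
                delay = maxDelay // never back off longer than the cap
            }
        }
    }
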
Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.507817 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qncf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-s58pz_openstack(9c59f1f7-caf7-4ab4-b405-dbf27330ff37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.509257 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-s58pz" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37"
Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.977564 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-s58pz" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37"
Jan 30 22:02:34 crc kubenswrapper[4979]: I0130 22:02:34.212558 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused"
Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.417585 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 
22:02:34.417792 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrndr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cj64f_openstack(79723cfd-4e3c-446c-bdf1-5c2c997950a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.419813 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cj64f" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.990096 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cj64f" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" Jan 30 22:02:44 crc kubenswrapper[4979]: I0130 22:02:44.213278 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.104410 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerDied","Data":"9e5bb560297f4e0e8f2115f8c48331514e53ce9d31d3b53377b9d219de77d2e7"} Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.104909 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e5bb560297f4e0e8f2115f8c48331514e53ce9d31d3b53377b9d219de77d2e7" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 
22:02:45.109170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerDied","Data":"5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a"} Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.109202 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.120783 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.129621 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.200974 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.201116 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.201187 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.201798 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs" (OuterVolumeSpecName: "logs") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202182 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202297 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202354 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202412 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202585 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202685 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.203421 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.203446 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.217279 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.217411 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw" (OuterVolumeSpecName: "kube-api-access-kd9hw") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "kube-api-access-kd9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.219255 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk" (OuterVolumeSpecName: "kube-api-access-gjdnk") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "kube-api-access-gjdnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.231640 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts" (OuterVolumeSpecName: "scripts") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.269801 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config" (OuterVolumeSpecName: "config") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.271668 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.292397 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.295134 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.297474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data" (OuterVolumeSpecName: "config-data") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306294 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306334 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306345 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306355 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306369 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306382 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306392 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306425 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306435 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306445 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") on node \"crc\" 
DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.309048 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.325724 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.408078 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.408120 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.119067 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.120218 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.170407 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.198745 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.211080 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.231081 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247163 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.247654 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247675 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.247703 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247712 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.247740 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="init" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247748 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="init" Jan 30 22:02:46 crc 
kubenswrapper[4979]: E0130 22:02:46.247760 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247766 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247948 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247964 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247976 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.249375 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.254277 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.258609 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.259429 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.324826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.324900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325118 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325205 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325244 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325268 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325360 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427246 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427429 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427466 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427493 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427569 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" 
(UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427599 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427806 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427837 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427889 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.428023 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.436582 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.436906 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.437468 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.440428 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc 
kubenswrapper[4979]: I0130 22:02:46.449731 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.469304 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.528088 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.528267 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njts7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-db-sync-cf4cw_openstack(80aa258c-fc1b-4379-8b50-ac89cb9b4568): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.530001 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-cf4cw" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.571883 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.704090 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.840841 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841440 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841524 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841553 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841652 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841812 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.842951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run" (OuterVolumeSpecName: "httpd-run") 
pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.844005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs" (OuterVolumeSpecName: "logs") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.849729 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts" (OuterVolumeSpecName: "scripts") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.850358 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.853698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d" (OuterVolumeSpecName: "kube-api-access-2w54d") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "kube-api-access-2w54d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.881362 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.934429 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data" (OuterVolumeSpecName: "config-data") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.949927 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.949967 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.949983 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950106 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950130 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950145 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950157 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.982299 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.986064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.052321 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.100621 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" path="/var/lib/kubelet/pods/57847e36-4024-4fcd-a141-ac9bac71a969/volumes" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.102091 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" path="/var/lib/kubelet/pods/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4/volumes" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.133530 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerDied","Data":"0235a45d441f3c8cb176ee3ba5d7b3a592e383ad0a2a9bc2db0b906132b62220"} Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.134012 4979 scope.go:117] "RemoveContainer" containerID="d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 
22:02:47.133611 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.135904 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122"} Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.141298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c"} Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.146134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerStarted","Data":"95d8aa47cbce3a638a3e8c22804badd17f638cf4879d004b591bbbd61ab25324"} Jan 30 22:02:47 crc kubenswrapper[4979]: E0130 22:02:47.148301 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-cf4cw" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.173324 4979 scope.go:117] "RemoveContainer" containerID="5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.199910 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.216404 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.228500 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: E0130 22:02:47.229222 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" Jan 30 22:02:47 crc kubenswrapper[4979]: E0130 22:02:47.229289 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229298 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229535 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229565 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.231841 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.236104 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.236478 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.255506 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360833 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360882 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360923 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360998 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.361060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.361148 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: W0130 22:02:47.367830 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e002e48_1108_41f0_a1de_5a6b89d9e534.slice/crio-deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08 WatchSource:0}: Error finding container deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08: Status 404 returned error can't find the container with id deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08 Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.368268 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.462922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463072 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463109 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463146 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463189 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463216 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463258 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463302 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.464501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.464727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.465069 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.471167 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.471744 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.477792 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.479717 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.503618 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"glance-default-external-api-0\" (UID: 
\"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.511590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.559636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.167784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerStarted","Data":"7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.172892 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerStarted","Data":"240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.175864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerStarted","Data":"7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.175923 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerStarted","Data":"deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.207764 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.222706 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dmn2z" podStartSLOduration=20.222673912 podStartE2EDuration="20.222673912s" podCreationTimestamp="2026-01-30 22:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:48.186836819 +0000 UTC m=+1364.148083852" watchObservedRunningTime="2026-01-30 22:02:48.222673912 +0000 UTC m=+1364.183920945" Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.224022 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s58pz" podStartSLOduration=2.017001483 podStartE2EDuration="32.224012938s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.634882834 +0000 UTC m=+1333.596129867" lastFinishedPulling="2026-01-30 22:02:47.841894279 +0000 UTC m=+1363.803141322" observedRunningTime="2026-01-30 22:02:48.207659042 +0000 UTC m=+1364.168906075" watchObservedRunningTime="2026-01-30 22:02:48.224012938 +0000 UTC m=+1364.185259971" Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.089810 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" path="/var/lib/kubelet/pods/3d906f2e-2930-4b79-adf3-1367943b9a75/volumes" Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.211003 
4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d"} Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.215375 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.232804 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerStarted","Data":"3e810c936e02f2844b80a87456dccb9adbb5f44faaa30ddef373326002018cd3"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.290860 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerStarted","Data":"24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.306231 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerStarted","Data":"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.306595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerStarted","Data":"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.337519 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.3374959220000004 podStartE2EDuration="4.337495922s" podCreationTimestamp="2026-01-30 22:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:50.324308712 +0000 UTC m=+1366.285555775" watchObservedRunningTime="2026-01-30 22:02:50.337495922 +0000 UTC m=+1366.298742955" Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.328967 4979 generic.go:334] "Generic (PLEG): container finished" podID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerID="7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89" exitCode=0 Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.329333 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerDied","Data":"7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89"} Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.330574 4979 generic.go:334] "Generic (PLEG): container finished" podID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerID="240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb" exitCode=0 Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.330604 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerDied","Data":"240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb"} Jan 30 22:02:52 crc 
kubenswrapper[4979]: I0130 22:02:52.360213 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.360191514 podStartE2EDuration="5.360191514s" podCreationTimestamp="2026-01-30 22:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:50.356119947 +0000 UTC m=+1366.317367010" watchObservedRunningTime="2026-01-30 22:02:52.360191514 +0000 UTC m=+1368.321438537" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.376955 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerDied","Data":"386d53c83a51fa8ebf1662105890a6cd9dd37690f36cb6bac7142c9df6dc4505"} Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.377755 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="386d53c83a51fa8ebf1662105890a6cd9dd37690f36cb6bac7142c9df6dc4505" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.380257 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerDied","Data":"95d8aa47cbce3a638a3e8c22804badd17f638cf4879d004b591bbbd61ab25324"} Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.380304 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d8aa47cbce3a638a3e8c22804badd17f638cf4879d004b591bbbd61ab25324" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.407557 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.456581 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.572267 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.572349 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573288 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573464 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573527 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573562 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574528 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574626 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574655 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574790 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574845 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod 
\"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574984 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.576172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs" (OuterVolumeSpecName: "logs") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.579359 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts" (OuterVolumeSpecName: "scripts") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.582381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2" (OuterVolumeSpecName: "kube-api-access-qncf2") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "kube-api-access-qncf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.582687 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.582901 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.583013 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8" (OuterVolumeSpecName: "kube-api-access-6xcf8") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "kube-api-access-6xcf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.585322 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts" (OuterVolumeSpecName: "scripts") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.614005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.615559 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.622256 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data" (OuterVolumeSpecName: "config-data") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.624614 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.637693 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data" (OuterVolumeSpecName: "config-data") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.652335 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.689881 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691482 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691504 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691527 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691537 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691547 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691560 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691568 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691579 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691589 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691600 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.391317 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerStarted","Data":"87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091"} Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.394879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5"} Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.394898 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.394898 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.395731 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.395779 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.422066 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cj64f" podStartSLOduration=3.174150592 podStartE2EDuration="41.422024068s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.896552449 +0000 UTC m=+1333.857799482" lastFinishedPulling="2026-01-30 22:02:56.144425925 +0000 UTC m=+1372.105672958" observedRunningTime="2026-01-30 22:02:57.421572896 +0000 UTC m=+1373.382819939" watchObservedRunningTime="2026-01-30 22:02:57.422024068 +0000 UTC m=+1373.383271101" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544118 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:02:57 crc kubenswrapper[4979]: E0130 22:02:57.544591 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerName="keystone-bootstrap" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544610 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerName="keystone-bootstrap" Jan 30 22:02:57 crc kubenswrapper[4979]: E0130 22:02:57.544624 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerName="placement-db-sync" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544632 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerName="placement-db-sync" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544800 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerName="placement-db-sync" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544815 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerName="keystone-bootstrap" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.545860 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.550337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.550989 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.552070 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.552130 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.552996 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nknfn" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.560702 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.560767 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.569995 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.636991 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.638951 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.648441 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.648796 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.649136 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.649820 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.649957 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.650056 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.659414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.662656 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.665760 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714376 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714516 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714569 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714602 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714633 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714731 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.815965 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816050 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816077 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816101 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816145 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816168 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816220 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816245 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816419 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.820135 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.821718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.824797 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.830948 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.831449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.833477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: 
\"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.849760 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.868743 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918538 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918631 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918743 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918773 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918813 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918843 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918910 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.935609 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.935691 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.936158 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.937163 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.937348 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.937600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.946778 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.969839 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.971683 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.977737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.001943 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.123498 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.123618 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.123927 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124014 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124158 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124251 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124379 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227077 4979 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227232 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227266 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227308 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227411 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.228498 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.233633 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.234126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"placement-5574d874bd-cg256\" (UID: 
\"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.239410 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.258749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.258845 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.259336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.282630 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.317763 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.352986 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.428887 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerStarted","Data":"3f6e1720fbdfa450cb84e7986470398e83ff14833f01c282921516d94399a109"} Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.429544 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.429612 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.628819 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:02:58 crc kubenswrapper[4979]: W0130 22:02:58.668749 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c29874_a63d_4d35_a1a6_256d811ac6f8.slice/crio-3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711 WatchSource:0}: Error finding container 3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711: Status 404 returned error can't find the container with id 3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711 Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.954982 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:02:58 crc kubenswrapper[4979]: W0130 22:02:58.982559 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc808d1a7_071b_4af7_b86d_adbc0e98803b.slice/crio-c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9 WatchSource:0}: Error finding container c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9: Status 404 returned error can't find the container with id c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9 Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.453995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerStarted","Data":"87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.454470 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerStarted","Data":"7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.456353 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerStarted","Data":"c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.458196 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerStarted","Data":"3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.956121 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.956288 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:03:00 crc kubenswrapper[4979]: I0130 22:03:00.066044 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:01 crc kubenswrapper[4979]: I0130 22:03:01.139278 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:03:01 crc kubenswrapper[4979]: I0130 22:03:01.139811 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:03:01 crc kubenswrapper[4979]: I0130 22:03:01.299497 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.497690 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerStarted","Data":"dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db"} Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.498808 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.502922 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerStarted","Data":"db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62"} Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503002 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerStarted","Data":"4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3"} Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503069 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503092 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503460 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503603 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.533417 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f5778c484-5rg8p" podStartSLOduration=5.533383499 podStartE2EDuration="5.533383499s" podCreationTimestamp="2026-01-30 22:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:02.528652433 +0000 UTC m=+1378.489899476" watchObservedRunningTime="2026-01-30 22:03:02.533383499 +0000 UTC m=+1378.494630542" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.555981 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5574d874bd-cg256" podStartSLOduration=5.555956689 podStartE2EDuration="5.555956689s" podCreationTimestamp="2026-01-30 22:02:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:02.552807265 +0000 UTC m=+1378.514054318" watchObservedRunningTime="2026-01-30 22:03:02.555956689 +0000 UTC m=+1378.517203722" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.595789 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8467c9fd48-4d9pm" podStartSLOduration=5.595763397 podStartE2EDuration="5.595763397s" podCreationTimestamp="2026-01-30 22:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:02.58044428 +0000 UTC m=+1378.541691313" watchObservedRunningTime="2026-01-30 22:03:02.595763397 +0000 UTC m=+1378.557010430" Jan 30 22:03:04 crc kubenswrapper[4979]: I0130 22:03:04.524322 4979 generic.go:334] "Generic (PLEG): container finished" podID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerID="87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091" exitCode=0 Jan 30 22:03:04 crc kubenswrapper[4979]: I0130 22:03:04.526159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerDied","Data":"87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091"} Jan 30 22:03:04 crc kubenswrapper[4979]: I0130 22:03:04.649287 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:05 crc kubenswrapper[4979]: I0130 22:03:05.615946 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.421627 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.457128 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.457349 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.457419 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.468629 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr" (OuterVolumeSpecName: "kube-api-access-zrndr") pod "79723cfd-4e3c-446c-bdf1-5c2c997950a8" (UID: "79723cfd-4e3c-446c-bdf1-5c2c997950a8"). InnerVolumeSpecName "kube-api-access-zrndr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.476192 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "79723cfd-4e3c-446c-bdf1-5c2c997950a8" (UID: "79723cfd-4e3c-446c-bdf1-5c2c997950a8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.493126 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79723cfd-4e3c-446c-bdf1-5c2c997950a8" (UID: "79723cfd-4e3c-446c-bdf1-5c2c997950a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.549138 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerDied","Data":"ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e"} Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.549196 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.549208 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.561004 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.561070 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.561080 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.829819 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:03:06 crc kubenswrapper[4979]: E0130 22:03:06.830258 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerName="barbican-db-sync" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.830278 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerName="barbican-db-sync" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.830547 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerName="barbican-db-sync" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.831786 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.834245 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.834594 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cxc2m" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.837559 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865372 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865524 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.887322 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.889610 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.892947 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968186 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968707 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968933 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968978 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " 
pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.969006 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.969057 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.970851 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.982335 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.983583 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.002159 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.002931 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.004944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.040412 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.042543 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.057227 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.070894 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071165 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071290 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.076291 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.078896 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.093663 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " 
pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.097322 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.114556 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.114986 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.149899 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.151810 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.156833 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.160494 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.166181 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.176929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177479 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177669 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177783 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.178120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.242090 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280394 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280465 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280493 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280640 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280708 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280749 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod 
\"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.282126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.286522 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.286583 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.286756 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.289488 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.308118 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.380285 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382467 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382510 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382575 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.383266 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.387114 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.387534 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.387897 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc 
kubenswrapper[4979]: I0130 22:03:07.403904 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.469963 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.053498 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.069071 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.183178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.192299 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:08 crc kubenswrapper[4979]: W0130 22:03:08.192562 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda48297f7_feed_4cde_9fb5_bb823c838752.slice/crio-c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5 WatchSource:0}: Error finding container c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5: Status 404 returned error can't find the container with id c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573362 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.574371 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573572 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" containerID="cri-o://1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573491 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" containerID="cri-o://f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573683 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" containerID="cri-o://a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573713 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" 
containerID="cri-o://5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.576578 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerStarted","Data":"c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.580235 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerStarted","Data":"bb24789e94c037f8d2c30cb247391e1793581183cde1ad3d02b4c483f6507c5b"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.586316 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerStarted","Data":"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.586384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerStarted","Data":"03e1b95e9a7f4f77b0e701bca53f07e0dfe1f445b0928c440b8370f19dcd14de"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.588669 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerStarted","Data":"3dde96c5169697a3e0c9d8b160bc83a4fafb1d44e05b294c10a09b1f06d958c9"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.609772 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.830771924 podStartE2EDuration="52.609752773s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.784255144 +0000 UTC m=+1333.745502177" lastFinishedPulling="2026-01-30 22:03:07.563235993 +0000 UTC m=+1383.524483026" observedRunningTime="2026-01-30 22:03:08.603824296 +0000 UTC m=+1384.565071329" watchObservedRunningTime="2026-01-30 22:03:08.609752773 +0000 UTC m=+1384.570999806" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613534 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613890 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" exitCode=2 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613898 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613941 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613970 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613982 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.616054 4979 generic.go:334] "Generic (PLEG): container finished" podID="a48297f7-feed-4cde-9fb5-bb823c838752" containerID="adbbc2a81ab034dd96c63d4ba709ca63691a9f7f475eee828c2446c45a19e39c" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.616116 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerDied","Data":"adbbc2a81ab034dd96c63d4ba709ca63691a9f7f475eee828c2446c45a19e39c"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.619978 4979 generic.go:334] "Generic (PLEG): container finished" podID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerID="d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.620094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerDied","Data":"d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.623223 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerStarted","Data":"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.623451 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.633697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerStarted","Data":"009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.655501 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.658070 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.661359 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.662580 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.684688 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.684695 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-cf4cw" podStartSLOduration=4.804469838 podStartE2EDuration="54.684674919s" podCreationTimestamp="2026-01-30 22:02:15 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.631998676 +0000 UTC m=+1333.593245719" lastFinishedPulling="2026-01-30 22:03:07.512203767 +0000 UTC m=+1383.473450800" observedRunningTime="2026-01-30 22:03:09.671335185 +0000 UTC m=+1385.632582218" watchObservedRunningTime="2026-01-30 22:03:09.684674919 +0000 UTC m=+1385.645921952" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.731362 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5455fcc558-tkb7p" podStartSLOduration=2.731339159 podStartE2EDuration="2.731339159s" podCreationTimestamp="2026-01-30 22:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:09.716741982 +0000 UTC m=+1385.677989005" watchObservedRunningTime="2026-01-30 22:03:09.731339159 +0000 UTC m=+1385.692586192" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753541 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753969 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855739 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855834 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855973 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.856052 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.856070 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.857649 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.864586 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.865245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.865937 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.867002 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.870138 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.874930 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.984932 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.644295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerStarted","Data":"0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1"} Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.650386 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerStarted","Data":"2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5"} Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.650671 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.662192 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerStarted","Data":"d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc"} Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.670364 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.692255 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" podStartSLOduration=4.692234205 podStartE2EDuration="4.692234205s" podCreationTimestamp="2026-01-30 22:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:10.690835497 +0000 UTC m=+1386.652082530" watchObservedRunningTime="2026-01-30 22:03:10.692234205 +0000 UTC m=+1386.653481238" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.726114 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:03:10 crc kubenswrapper[4979]: W0130 22:03:10.739586 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c466a98_f01c_49ab_841a_8f35c54e71f3.slice/crio-ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961 WatchSource:0}: Error finding container ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961: Status 404 returned error can't find the container with id ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961 Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.082691 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.212290 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"8481722d-b63c-4f8e-82e2-0960d719b46b\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.212825 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"8481722d-b63c-4f8e-82e2-0960d719b46b\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.212852 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"8481722d-b63c-4f8e-82e2-0960d719b46b\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.223463 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2" (OuterVolumeSpecName: "kube-api-access-vpvb2") pod "8481722d-b63c-4f8e-82e2-0960d719b46b" (UID: "8481722d-b63c-4f8e-82e2-0960d719b46b"). InnerVolumeSpecName "kube-api-access-vpvb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.268834 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8481722d-b63c-4f8e-82e2-0960d719b46b" (UID: "8481722d-b63c-4f8e-82e2-0960d719b46b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.301176 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config" (OuterVolumeSpecName: "config") pod "8481722d-b63c-4f8e-82e2-0960d719b46b" (UID: "8481722d-b63c-4f8e-82e2-0960d719b46b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.315708 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.315997 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.316132 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.707638 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerStarted","Data":"1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.716135 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerStarted","Data":"9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.753662 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" podStartSLOduration=3.64259039 podStartE2EDuration="5.753636301s" podCreationTimestamp="2026-01-30 22:03:06 +0000 UTC" firstStartedPulling="2026-01-30 22:03:08.08188198 +0000 UTC m=+1384.043129013" lastFinishedPulling="2026-01-30 22:03:10.192927891 +0000 UTC m=+1386.154174924" observedRunningTime="2026-01-30 22:03:11.732095208 +0000 UTC m=+1387.693342241" watchObservedRunningTime="2026-01-30 22:03:11.753636301 +0000 UTC m=+1387.714883334" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.759746 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerStarted","Data":"b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.759806 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerStarted","Data":"edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.759820 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerStarted","Data":"ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.760258 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.760336 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.764182 4979 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerDied","Data":"a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.764236 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.764397 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.771211 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" podStartSLOduration=3.6373820820000002 podStartE2EDuration="5.771184557s" podCreationTimestamp="2026-01-30 22:03:06 +0000 UTC" firstStartedPulling="2026-01-30 22:03:08.081534101 +0000 UTC m=+1384.042781134" lastFinishedPulling="2026-01-30 22:03:10.215336576 +0000 UTC m=+1386.176583609" observedRunningTime="2026-01-30 22:03:11.75700611 +0000 UTC m=+1387.718253143" watchObservedRunningTime="2026-01-30 22:03:11.771184557 +0000 UTC m=+1387.732431590" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.808706 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cd6984846-6pk8x" podStartSLOduration=2.8086799940000002 podStartE2EDuration="2.808679994s" podCreationTimestamp="2026-01-30 22:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:11.798433432 +0000 UTC m=+1387.759680465" watchObservedRunningTime="2026-01-30 22:03:11.808679994 +0000 UTC m=+1387.769927027" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.022655 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.073793 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:12 crc kubenswrapper[4979]: E0130 22:03:12.074577 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerName="neutron-db-sync" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.074605 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerName="neutron-db-sync" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.074994 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerName="neutron-db-sync" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.079612 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.139625 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.209728 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.211825 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.219772 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.220157 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.220386 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.220886 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cgj89" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.221749 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250052 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250112 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250215 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250347 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352730 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352921 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352967 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353085 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353226 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353289 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.355490 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.355963 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.356502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.357095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.357389 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.390973 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.440188 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.454977 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455608 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455723 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.462368 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.463185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.464046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.472676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.493517 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.542999 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.592069 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.762455 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763052 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763115 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763158 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763240 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763368 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.764104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.764751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.768658 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd" (OuterVolumeSpecName: "kube-api-access-xwcxd") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "kube-api-access-xwcxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.787436 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts" (OuterVolumeSpecName: "scripts") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.810050 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.840933 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" exitCode=0 Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841490 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841564 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d"} Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841616 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd"} Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841641 4979 scope.go:117] "RemoveContainer" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.845227 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" containerID="cri-o://2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5" gracePeriod=10 Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866123 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866157 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866166 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866175 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866184 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.902269 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.917811 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data" (OuterVolumeSpecName: "config-data") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.969287 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.969324 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.032641 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.170895 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.188590 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.221075 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.221995 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222110 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.222144 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222152 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.222174 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222180 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.222195 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222204 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222473 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222515 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222532 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222552 4979 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.225547 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.229565 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.229844 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.230190 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.346167 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381169 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381516 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381833 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381947 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.382019 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.382180 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.382359 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484584 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484706 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484788 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484870 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484916 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484989 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.485593 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.485966 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.494477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.495304 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.503014 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.503965 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.506405 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.548997 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.853192 4979 generic.go:334] "Generic (PLEG): container finished" podID="a48297f7-feed-4cde-9fb5-bb823c838752" containerID="2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5" exitCode=0 Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.853256 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerDied","Data":"2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5"} Jan 30 22:03:14 crc kubenswrapper[4979]: I0130 22:03:14.984269 4979 scope.go:117] "RemoveContainer" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" Jan 30 22:03:14 crc kubenswrapper[4979]: W0130 22:03:14.995489 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc2890a_dab6_4a6e_a7fd_a26feb5b2bb8.slice/crio-0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9 WatchSource:0}: Error finding container 0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9: Status 404 returned error can't find the container with id 0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9 Jan 30 22:03:14 crc kubenswrapper[4979]: W0130 22:03:14.996564 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba4b7345_9c9c_46e9_ac9a_d84093867012.slice/crio-d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc WatchSource:0}: Error finding container d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc: Status 404 returned error can't find the container with id d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.088905 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" path="/var/lib/kubelet/pods/6043875b-c6a4-4cbd-919e-79a61239eaa6/volumes" Jan 30 22:03:15 crc 
kubenswrapper[4979]: I0130 22:03:15.213053 4979 scope.go:117] "RemoveContainer" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.316831 4979 scope.go:117] "RemoveContainer" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.356450 4979 scope.go:117] "RemoveContainer" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.358965 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9\": container with ID starting with 1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9 not found: ID does not exist" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.359075 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9"} err="failed to get container status \"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9\": rpc error: code = NotFound desc = could not find container \"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9\": container with ID starting with 1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9 not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.359140 4979 scope.go:117] "RemoveContainer" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.360714 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5\": container with ID starting with 5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5 not found: ID does not exist" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.360765 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5"} err="failed to get container status \"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5\": rpc error: code = NotFound desc = could not find container \"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5\": container with ID starting with 5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5 not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.360800 4979 scope.go:117] "RemoveContainer" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.362015 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d\": container with ID starting with a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d not found: ID does not exist" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.362122 4979 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d"} err="failed to get container status \"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d\": rpc error: code = NotFound desc = could not find container \"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d\": container with ID starting with a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.362191 4979 scope.go:117] "RemoveContainer" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.362991 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122\": container with ID starting with f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122 not found: ID does not exist" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.363057 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122"} err="failed to get container status \"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122\": rpc error: code = NotFound desc = could not find container \"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122\": container with ID starting with f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122 not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.397211 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.664589 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.667449 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.671276 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.671784 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.716374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746094 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746236 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746283 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746330 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746440 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746914 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.833967 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849665 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849752 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.850024 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.850157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.850398 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.860237 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.873121 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.877297 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.880468 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.886577 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.888686 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.888916 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.898020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"6dbed89dcb99abab4522a3860a00ee5c7bea5cb37a875572e8e74067b72a1d9c"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerStarted","Data":"a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902650 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerStarted","Data":"31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902668 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerStarted","Data":"d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902934 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.909298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerDied","Data":"c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 
22:03:15.909366 4979 scope.go:117] "RemoveContainer" containerID="2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5"
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.909430 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd"
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.911056 4979 generic.go:334] "Generic (PLEG): container finished" podID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerID="28fa5fdce3759a70252b84e9d2a3128dd1ea647aeca78f30af0e925e772a5b64" exitCode=0
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.911109 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerDied","Data":"28fa5fdce3759a70252b84e9d2a3128dd1ea647aeca78f30af0e925e772a5b64"}
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.911133 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerStarted","Data":"0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9"}
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.938170 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-575496bbc6-tpmv9" podStartSLOduration=3.938143052 podStartE2EDuration="3.938143052s" podCreationTimestamp="2026-01-30 22:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:15.92189522 +0000 UTC m=+1391.883142273" watchObservedRunningTime="2026-01-30 22:03:15.938143052 +0000 UTC m=+1391.899390085"
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953266 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") "
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953343 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") "
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953427 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") "
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953519 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") "
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953669 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") "
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953721 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") "
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.965262 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d" (OuterVolumeSpecName: "kube-api-access-w5n8d") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "kube-api-access-w5n8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.993912 4979 scope.go:117] "RemoveContainer" containerID="adbbc2a81ab034dd96c63d4ba709ca63691a9f7f475eee828c2446c45a19e39c"
Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.997678 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz"
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.048095 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.048092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.056284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.056300 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config" (OuterVolumeSpecName: "config") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060062 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060102 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060115 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060127 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060139 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.064773 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.165696 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.313194 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"]
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.332798 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"]
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.710918 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"]
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.947343 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerStarted","Data":"079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea"}
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.949591 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2"
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.951400 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerStarted","Data":"ca8441f7e30661b52f9821e4f8bade797db77f1bc59f74f658c35d0b1cade61a"}
Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.958144 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56"}
Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.095759 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" path="/var/lib/kubelet/pods/a48297f7-feed-4cde-9fb5-bb823c838752/volumes"
Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.971705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerStarted","Data":"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260"}
Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.972617 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerStarted","Data":"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a"}
Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.972640 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-ccc5789d5-9fbcz"
Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.975185 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8"}
Jan 30 22:03:18 crc kubenswrapper[4979]: I0130 22:03:18.000994 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-ccc5789d5-9fbcz" podStartSLOduration=3.000968471 podStartE2EDuration="3.000968471s" podCreationTimestamp="2026-01-30 22:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:17.996501822 +0000 UTC m=+1393.957748875" watchObservedRunningTime="2026-01-30 22:03:18.000968471 +0000 UTC m=+1393.962215504"
Jan 30 22:03:18 crc kubenswrapper[4979]: I0130 22:03:18.003358 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" podStartSLOduration=6.003347293 podStartE2EDuration="6.003347293s" podCreationTimestamp="2026-01-30 22:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:16.997557416 +0000 UTC m=+1392.958804449" watchObservedRunningTime="2026-01-30 22:03:18.003347293 +0000 UTC m=+1393.964594326"
Jan 30 22:03:19 crc kubenswrapper[4979]: I0130 22:03:19.010933 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f"}
Jan 30 22:03:19 crc kubenswrapper[4979]: I0130 22:03:19.579351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5455fcc558-tkb7p"
Jan 30 22:03:19 crc kubenswrapper[4979]: I0130 22:03:19.641335 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5455fcc558-tkb7p"
Jan 30 22:03:20 crc kubenswrapper[4979]: I0130 22:03:20.022864 4979 generic.go:334] "Generic (PLEG): container finished" podID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerID="009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3" exitCode=0
Jan 30 22:03:20 crc kubenswrapper[4979]: I0130 22:03:20.024224 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerDied","Data":"009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3"}
Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.599552 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.599629 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.599770 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600045 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600076 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600161 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600359 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600667 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.619447 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts" (OuterVolumeSpecName: "scripts") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.622508 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7" (OuterVolumeSpecName: "kube-api-access-njts7") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "kube-api-access-njts7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.633204 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.674202 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data" (OuterVolumeSpecName: "config-data") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.681525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702518 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702568 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702583 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702595 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702605 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.999866 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.048998 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32"} Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.049193 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.051487 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerDied","Data":"aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3"} Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.051519 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.051569 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.075322 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.2735942 podStartE2EDuration="9.075289773s" podCreationTimestamp="2026-01-30 22:03:13 +0000 UTC" firstStartedPulling="2026-01-30 22:03:15.407136666 +0000 UTC m=+1391.368383699" lastFinishedPulling="2026-01-30 22:03:21.208832229 +0000 UTC m=+1397.170079272" observedRunningTime="2026-01-30 22:03:22.072876748 +0000 UTC m=+1398.034123781" watchObservedRunningTime="2026-01-30 22:03:22.075289773 +0000 UTC m=+1398.036536816" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.127116 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.214207 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.214461 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" containerID="cri-o://73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" gracePeriod=30 Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.214943 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" containerID="cri-o://3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" gracePeriod=30 Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.435128 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: E0130 22:03:22.445539 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445600 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" Jan 30 22:03:22 crc kubenswrapper[4979]: E0130 22:03:22.445641 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="init" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445648 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="init" Jan 30 22:03:22 crc kubenswrapper[4979]: E0130 22:03:22.445666 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerName="cinder-db-sync" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445673 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerName="cinder-db-sync" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445957 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerName="cinder-db-sync" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445981 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.447050 4979 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.448105 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.450831 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5h7pb" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.451482 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.452718 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.457799 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.479740 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532646 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532827 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532881 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532912 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532938 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.606321 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.606570 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" containerID="cri-o://1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68" gracePeriod=10 Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635545 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635586 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635671 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635725 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635822 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.644130 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.644527 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 
22:03:22.646588 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.646742 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.650647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.670127 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.701800 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.701988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741771 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741886 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741916 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741939 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.742004 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.742073 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.781826 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.812130 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.813982 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.819360 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.821622 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.845893 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846023 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846066 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846090 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846109 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tcv\" (UniqueName: 
\"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846190 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846217 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846280 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846312 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846342 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846363 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.847670 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.847829 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: 
\"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.848232 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.849288 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.859196 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.867060 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.918631 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951576 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951690 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951764 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " 
pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951897 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.952720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.966206 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.972334 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.977643 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.994306 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.994384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.092953 4979 generic.go:334] "Generic (PLEG): container finished" podID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" exitCode=143 Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.098108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerDied","Data":"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8"} Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.105487 4979 generic.go:334] "Generic (PLEG): container finished" podID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerID="1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68" exitCode=0 Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.105770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerDied","Data":"1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68"} Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.243751 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.470072 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.476614 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.479615 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.480258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.525387 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp" (OuterVolumeSpecName: "kube-api-access-4l7jp") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "kube-api-access-4l7jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.584300 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.584370 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.584421 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.585162 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.590207 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.633143 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.656773 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.677506 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.693068 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.693125 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.717272 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.746594 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config" (OuterVolumeSpecName: "config") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.752327 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.796692 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.797165 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.797205 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: W0130 22:03:23.897637 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc9229_6c16_4bd2_b677_f26acb49716e.slice/crio-8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6 WatchSource:0}: Error finding container 8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6: Status 404 returned error can't find the container with id 8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6 Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.907964 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.125609 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerStarted","Data":"d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.125683 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerStarted","Data":"bd0c08ab5da0f9972ab0ecfaa7d4a96b3e692f626faf2e99b754b19a6fd17552"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.129314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerStarted","Data":"9e233c467b56b274cf91a0fd383468a12ee48c944ec900a8f2ba3fafe0a3e4a7"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.139324 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerStarted","Data":"8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.146189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerDied","Data":"0bbffd435fbf3836f4de2a4551e90534d72d8f16d6de3150a0817077872230f4"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.146535 4979 scope.go:117] "RemoveContainer" containerID="1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68" Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.146841 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.188361 4979 scope.go:117] "RemoveContainer" containerID="a84e16cda693df587eff75844a45206ef87069920f6876c4a2c9eb4f7fae9fbe" Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.197352 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.205339 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.090000 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" path="/var/lib/kubelet/pods/734e25b4-90d2-466b-a71d-029b7fd4b491/volumes" Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.162711 4979 generic.go:334] "Generic (PLEG): container finished" podID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerID="d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655" exitCode=0 Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.163176 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerDied","Data":"d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655"} Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.163475 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerStarted","Data":"cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3"} Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.163605 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.185561 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerStarted","Data":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.218890 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" podStartSLOduration=3.218861002 podStartE2EDuration="3.218861002s" podCreationTimestamp="2026-01-30 22:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:25.203969995 +0000 UTC m=+1401.165217018" watchObservedRunningTime="2026-01-30 22:03:25.218861002 +0000 UTC m=+1401.180108025" Jan 30 22:03:25 crc 
kubenswrapper[4979]: I0130 22:03:25.438402 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.463677 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:37790->10.217.0.159:9311: read: connection reset by peer" Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.463676 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:37786->10.217.0.159:9311: read: connection reset by peer" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.039965 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.048932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.048987 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.049847 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.050009 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.050131 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.050650 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs" (OuterVolumeSpecName: "logs") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "logs". 
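[note] The two "Probe failed" records above are HTTP readiness probes against barbican's /healthcheck endpoint; a connection reset is expected while the container is shutting down under its grace period. A minimal stand-in for such a probe (Python stdlib; the URL comes from the log, and the "any status below 400 succeeds" rule mirrors how Kubernetes HTTP probes behave but is stated here as an assumption, not kubelet's prober code):

    import urllib.error
    import urllib.request

    def http_readiness_probe(url, timeout=1.0):
        """Return (ready, detail); network errors such as 'connection reset
        by peer' count as probe failures, as in the records above."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400, "HTTP %d" % resp.status
        except (urllib.error.URLError, OSError) as exc:
            return False, str(exc)

    print(http_readiness_probe("http://10.217.0.159:9311/healthcheck"))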
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.051266 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.057760 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.060184 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2" (OuterVolumeSpecName: "kube-api-access-xtvw2") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "kube-api-access-xtvw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.089245 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.116876 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data" (OuterVolumeSpecName: "config-data") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153157 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153192 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153202 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153212 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.226928 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerStarted","Data":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.227125 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" containerID="cri-o://b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" gracePeriod=30 Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.227386 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.227677 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" containerID="cri-o://ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" gracePeriod=30 Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241244 4979 generic.go:334] "Generic (PLEG): container finished" podID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" exitCode=0 Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241328 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerDied","Data":"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerDied","Data":"03e1b95e9a7f4f77b0e701bca53f07e0dfe1f445b0928c440b8370f19dcd14de"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241395 4979 scope.go:117] "RemoveContainer" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241527 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.248091 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerStarted","Data":"dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.265927 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.265893786 podStartE2EDuration="4.265893786s" podCreationTimestamp="2026-01-30 22:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:26.251055071 +0000 UTC m=+1402.212302124" watchObservedRunningTime="2026-01-30 22:03:26.265893786 +0000 UTC m=+1402.227140829" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.329799 4979 scope.go:117] "RemoveContainer" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.383778 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.406093 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.429539 4979 scope.go:117] "RemoveContainer" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" Jan 30 22:03:26 crc kubenswrapper[4979]: E0130 22:03:26.430310 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67\": container with ID starting with 3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67 not found: ID does not exist" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.430385 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67"} err="failed to get container status \"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67\": rpc error: code = NotFound desc = could not find container \"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67\": container with ID starting with 3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67 not found: ID does not exist" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.430433 4979 scope.go:117] "RemoveContainer" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" Jan 30 22:03:26 crc kubenswrapper[4979]: E0130 22:03:26.432765 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8\": container with ID starting with 73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8 not found: ID does not exist" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.432837 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8"} 
err="failed to get container status \"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8\": rpc error: code = NotFound desc = could not find container \"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8\": container with ID starting with 73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.086264 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" path="/var/lib/kubelet/pods/0aa8f9d6-442a-4070-b11f-13564f4c2c43/volumes" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.228862 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.264782 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerStarted","Data":"af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267121 4979 generic.go:334] "Generic (PLEG): container finished" podID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" exitCode=0 Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267156 4979 generic.go:334] "Generic (PLEG): container finished" podID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" exitCode=143 Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267258 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerDied","Data":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267511 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerDied","Data":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267535 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerDied","Data":"8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267563 4979 scope.go:117] "RemoveContainer" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299047 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299130 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 
22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299243 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299365 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299386 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.300269 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs" (OuterVolumeSpecName: "logs") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.304405 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.310287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts" (OuterVolumeSpecName: "scripts") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.310535 4979 scope.go:117] "RemoveContainer" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.312673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv" (OuterVolumeSpecName: "kube-api-access-66tcv") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "kube-api-access-66tcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.313346 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.323993 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.404963812 podStartE2EDuration="5.323965903s" podCreationTimestamp="2026-01-30 22:03:22 +0000 UTC" firstStartedPulling="2026-01-30 22:03:23.592053394 +0000 UTC m=+1399.553300417" lastFinishedPulling="2026-01-30 22:03:24.511055475 +0000 UTC m=+1400.472302508" observedRunningTime="2026-01-30 22:03:27.301589749 +0000 UTC m=+1403.262836792" watchObservedRunningTime="2026-01-30 22:03:27.323965903 +0000 UTC m=+1403.285212936" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.371449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.386273 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data" (OuterVolumeSpecName: "config-data") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402672 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402742 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402758 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402772 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402787 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402794 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402801 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.481173 4979 scope.go:117] "RemoveContainer" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.481735 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": container with ID starting with ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91 not found: ID does not exist" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.481808 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} err="failed to get container status \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": rpc error: code = NotFound desc = could not find container \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": container with ID starting with ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.481853 4979 scope.go:117] "RemoveContainer" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.482291 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": container with ID starting with b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419 not found: ID does not exist" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.482709 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} err="failed to get container status \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": rpc error: code = NotFound desc = could not find container \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": container with ID starting with b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.482735 4979 scope.go:117] "RemoveContainer" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.483851 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} err="failed to get container status \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": rpc error: code = NotFound desc = could not find container \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": container with ID starting with ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.483894 4979 scope.go:117] "RemoveContainer" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.484313 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} err="failed to get container status \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": rpc error: code = NotFound desc = could not find container \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": container with ID starting with b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.617555 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.641224 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.661786 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.662677 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="init" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.662774 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="init" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.662887 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.662946 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663045 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663133 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663227 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663283 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663358 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663421 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663485 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663553 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663840 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663959 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.664047 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.664118 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.664175 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.665368 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.670485 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.671095 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.671181 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.677909 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.708703 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.708761 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709116 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709241 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709292 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709331 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709378 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709445 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.782848 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810780 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810871 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810917 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810987 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811049 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811087 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811123 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " 
pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811147 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811193 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811813 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.816535 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.816687 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.816903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.819073 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.819999 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.820922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.834749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:28 crc kubenswrapper[4979]: I0130 
22:03:28.028300 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:28 crc kubenswrapper[4979]: I0130 22:03:28.496969 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.121388 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" path="/var/lib/kubelet/pods/c0bc9229-6c16-4bd2-b677-f26acb49716e/volumes" Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.324081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerStarted","Data":"70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7"} Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.324547 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerStarted","Data":"63cab1632ab5734414fe0ad9e4d6c6c07d6d67f4ee2af410de1ca78ec4b0eb26"} Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.417177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.604139 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.683819 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.686735 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8467c9fd48-4d9pm" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api" containerID="cri-o://87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22" gracePeriod=30 Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.686882 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8467c9fd48-4d9pm" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log" containerID="cri-o://7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d" gracePeriod=30 Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.240293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.336591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerStarted","Data":"33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9"} Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.336846 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.341111 4979 generic.go:334] "Generic (PLEG): container finished" podID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerID="7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d" exitCode=143 Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.341301 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" 
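The "Killing container with a grace period" entries in this stretch (gracePeriod=30 for cinder-api and placement, 10 for dnsmasq further down) follow API-initiated deletions that arrive as "SyncLoop DELETE". A minimal client-go sketch of the same kind of deletion, assuming in-cluster credentials and using a pod name taken from the log; the explicit GracePeriodSeconds override is optional and otherwise defaults to the pod spec's terminationGracePeriodSeconds:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Request deletion with a 30s grace period, matching the log entries.
	grace := int64(30)
	err = clientset.CoreV1().Pods("openstack").Delete(context.TODO(),
		"placement-8467c9fd48-4d9pm",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
	fmt.Println("delete requested with 30s grace period")
}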
event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerDied","Data":"7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d"} Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.377062 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.377019347 podStartE2EDuration="3.377019347s" podCreationTimestamp="2026-01-30 22:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:30.357708013 +0000 UTC m=+1406.318955046" watchObservedRunningTime="2026-01-30 22:03:30.377019347 +0000 UTC m=+1406.338266380" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.923351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.925856 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.928201 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.929647 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9brkn" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.929883 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.943368 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.947960 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.065087 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.065485 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns" containerID="cri-o://079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea" gracePeriod=10 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.067697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.067894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.068056 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " 
pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.068120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.113596 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170056 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170746 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170942 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.173222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.183024 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.183046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.194624 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9tzk\" (UniqueName: 
\"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.275564 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.334100 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.350849 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.365348 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.366793 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.382363 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.458936 4979 generic.go:334] "Generic (PLEG): container finished" podID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerID="87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22" exitCode=0 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.459023 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerDied","Data":"87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22"} Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.460956 4979 generic.go:334] "Generic (PLEG): container finished" podID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerID="079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea" exitCode=0 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.461268 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler" containerID="cri-o://dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb" gracePeriod=30 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.461412 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe" containerID="cri-o://af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d" gracePeriod=30 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.461084 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerDied","Data":"079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea"} Jan 30 22:03:33 crc kubenswrapper[4979]: E0130 22:03:33.473613 4979 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 22:03:33 crc kubenswrapper[4979]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_2b9a35db-944b-404f-8936-55d7bf448619_0(b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088" Netns:"/var/run/netns/386f794f-236f-447a-8437-ea21352d89c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088;K8S_POD_UID=2b9a35db-944b-404f-8936-55d7bf448619" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/2b9a35db-944b-404f-8936-55d7bf448619]: expected pod UID "2b9a35db-944b-404f-8936-55d7bf448619" but got "82508003-60c8-463b-92a9-bc9521fcfa03" from Kube API Jan 30 22:03:33 crc kubenswrapper[4979]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 22:03:33 crc kubenswrapper[4979]: > Jan 30 22:03:33 crc kubenswrapper[4979]: E0130 22:03:33.473702 4979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 22:03:33 crc kubenswrapper[4979]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_2b9a35db-944b-404f-8936-55d7bf448619_0(b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088" Netns:"/var/run/netns/386f794f-236f-447a-8437-ea21352d89c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088;K8S_POD_UID=2b9a35db-944b-404f-8936-55d7bf448619" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/2b9a35db-944b-404f-8936-55d7bf448619]: expected pod UID "2b9a35db-944b-404f-8936-55d7bf448619" but got "82508003-60c8-463b-92a9-bc9521fcfa03" from Kube API Jan 30 22:03:33 crc kubenswrapper[4979]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 22:03:33 crc kubenswrapper[4979]: > pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.478512 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.478795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.478966 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.479186 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.573285 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584309 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584579 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.585702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.592126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.595146 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.605859 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.685967 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686470 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686552 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686697 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686751 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686799 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.697692 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp" (OuterVolumeSpecName: "kube-api-access-2dsxp") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "kube-api-access-2dsxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.737682 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.757537 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.774401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.776934 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.793444 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.794384 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.794497 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.794578 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.797146 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config" (OuterVolumeSpecName: "config") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.810222 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.813511 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896525 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896615 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896733 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896802 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.897877 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs" (OuterVolumeSpecName: "logs") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.898616 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.898632 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.898646 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.908472 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx" (OuterVolumeSpecName: "kube-api-access-spxhx") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "kube-api-access-spxhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.928450 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts" (OuterVolumeSpecName: "scripts") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.991587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.992689 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data" (OuterVolumeSpecName: "config-data") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001881 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001938 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001955 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001967 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.067136 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.086262 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.103811 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.103864 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.282545 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.493759 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82508003-60c8-463b-92a9-bc9521fcfa03","Type":"ContainerStarted","Data":"d1e04049c4842166c6044361f7384530e87c43b6de980410a3f541d60c5053b9"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.497635 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerDied","Data":"3f6e1720fbdfa450cb84e7986470398e83ff14833f01c282921516d94399a109"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.497661 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.497736 4979 scope.go:117] "RemoveContainer" containerID="87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.500048 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerDied","Data":"0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.500066 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.506354 4979 generic.go:334] "Generic (PLEG): container finished" podID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerID="af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d" exitCode=0 Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.506435 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.506458 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerDied","Data":"af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.513864 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2b9a35db-944b-404f-8936-55d7bf448619" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.521881 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.528929 4979 scope.go:117] "RemoveContainer" containerID="7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.551119 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.564089 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.568764 4979 scope.go:117] "RemoveContainer" containerID="079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.573309 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.580821 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.592753 4979 scope.go:117] "RemoveContainer" containerID="28fa5fdce3759a70252b84e9d2a3128dd1ea647aeca78f30af0e925e772a5b64" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615235 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615423 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615504 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615559 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.617330 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.621317 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.621518 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk" (OuterVolumeSpecName: "kube-api-access-x9tzk") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "kube-api-access-x9tzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.622308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718919 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718969 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718979 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718988 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.085958 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9a35db-944b-404f-8936-55d7bf448619" path="/var/lib/kubelet/pods/2b9a35db-944b-404f-8936-55d7bf448619/volumes" Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.086922 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" path="/var/lib/kubelet/pods/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8/volumes" Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.087739 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" path="/var/lib/kubelet/pods/fde9bde2-8262-41c5-b037-d2d4a44575f7/volumes" Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.525915 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.537156 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2b9a35db-944b-404f-8936-55d7bf448619" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" Jan 30 22:03:37 crc kubenswrapper[4979]: I0130 22:03:37.561318 4979 generic.go:334] "Generic (PLEG): container finished" podID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerID="dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb" exitCode=0 Jan 30 22:03:37 crc kubenswrapper[4979]: I0130 22:03:37.561596 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerDied","Data":"dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb"} Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.009831 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.103107 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104011 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104170 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104204 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104237 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104935 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.113533 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts" (OuterVolumeSpecName: "scripts") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.113662 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.124287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb" (OuterVolumeSpecName: "kube-api-access-7trcb") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "kube-api-access-7trcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.170466 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209585 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209637 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209652 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209663 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.262418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data" (OuterVolumeSpecName: "config-data") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.311744 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.576373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerDied","Data":"9e233c467b56b274cf91a0fd383468a12ee48c944ec900a8f2ba3fafe0a3e4a7"} Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.576470 4979 scope.go:117] "RemoveContainer" containerID="af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.576483 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.615256 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.625090 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.647647 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649069 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649188 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe" Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649296 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649428 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api" Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649527 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649596 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log" Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649672 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649741 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler" Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649828 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="init" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649899 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="init" Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.650160 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650257 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650582 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650673 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650762 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650842 4979 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650925 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.652482 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.657867 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.663975 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719532 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719560 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719581 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.823935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 
22:03:38.824122 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824221 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824437 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.836525 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.836604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.836786 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.837289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " 
pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.843728 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.981527 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.093825 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" path="/var/lib/kubelet/pods/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0/volumes" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.687742 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.691608 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.697479 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.699990 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.702679 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.711988 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752558 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752640 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753156 
4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753264 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753383 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.855801 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856246 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856270 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856350 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856963 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.857536 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.857612 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.857899 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.863453 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.864139 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.867705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.875743 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.876330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.876676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:40 crc kubenswrapper[4979]: I0130 22:03:40.019911 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:40 crc kubenswrapper[4979]: I0130 22:03:40.171350 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.737874 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.738681 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-central-agent" containerID="cri-o://d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56" gracePeriod=30 Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.739242 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" containerID="cri-o://8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8" gracePeriod=30 Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.739284 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" containerID="cri-o://379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f" gracePeriod=30 Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.739492 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" containerID="cri-o://b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32" gracePeriod=30 Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.756327 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.553809 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658288 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32" exitCode=0 Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658347 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f" exitCode=2 Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658359 4979 
generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56" exitCode=0 Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32"} Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f"} Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658452 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56"} Jan 30 22:03:43 crc kubenswrapper[4979]: I0130 22:03:43.550164 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": dial tcp 10.217.0.163:3000: connect: connection refused" Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.019472 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.126441 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.126765 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-575496bbc6-tpmv9" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" containerID="cri-o://31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d" gracePeriod=30 Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.126945 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-575496bbc6-tpmv9" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" containerID="cri-o://a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7" gracePeriod=30 Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.703098 4979 scope.go:117] "RemoveContainer" containerID="dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb" Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.712120 4979 generic.go:334] "Generic (PLEG): container finished" podID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerID="a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7" exitCode=0 Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.712249 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerDied","Data":"a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7"} Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.725244 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8" exitCode=0 Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.725299 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.043521 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.125832 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.125904 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.125990 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.126063 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.126084 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.126815 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.127408 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.127452 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.127706 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.128192 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.128209 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.132651 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm" (OuterVolumeSpecName: "kube-api-access-9fnwm") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "kube-api-access-9fnwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.157280 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts" (OuterVolumeSpecName: "scripts") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.167320 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.227975 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.232999 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.233040 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.233056 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.233066 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.252633 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data" (OuterVolumeSpecName: "config-data") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.294591 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: W0130 22:03:47.306995 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21dfd874_e50d_4e61_a634_9f47ee92ff4f.slice/crio-7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed WatchSource:0}: Error finding container 7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed: Status 404 returned error can't find the container with id 7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.335346 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.559930 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.746364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82508003-60c8-463b-92a9-bc9521fcfa03","Type":"ContainerStarted","Data":"6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.749521 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerStarted","Data":"b64735411ca3cd7394e31868ccdaa7a77e584aec6259c66bd68d292da88aa3c5"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.765736 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.276291577 podStartE2EDuration="14.765714818s" 
podCreationTimestamp="2026-01-30 22:03:33 +0000 UTC" firstStartedPulling="2026-01-30 22:03:34.279272995 +0000 UTC m=+1410.240520028" lastFinishedPulling="2026-01-30 22:03:46.768696236 +0000 UTC m=+1422.729943269" observedRunningTime="2026-01-30 22:03:47.764113195 +0000 UTC m=+1423.725360238" watchObservedRunningTime="2026-01-30 22:03:47.765714818 +0000 UTC m=+1423.726961851" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.777923 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerStarted","Data":"7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.794577 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"6dbed89dcb99abab4522a3860a00ee5c7bea5cb37a875572e8e74067b72a1d9c"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.794685 4979 scope.go:117] "RemoveContainer" containerID="b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.794884 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.852692 4979 scope.go:117] "RemoveContainer" containerID="379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.874289 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.889937 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.897978 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898452 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898475 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898489 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898518 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898525 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898532 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898545 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-central-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898551 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" 
containerName="ceilometer-central-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898745 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898762 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898771 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-central-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898781 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.900512 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.907663 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.916736 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.922212 4979 scope.go:117] "RemoveContainer" containerID="8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.934842 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977375 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977468 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 
22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977769 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.000922 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.010339 4979 scope.go:117] "RemoveContainer" containerID="d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56" Jan 30 22:03:48 crc kubenswrapper[4979]: E0130 22:03:48.011224 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-k7j6x log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="be55c985-e7f8-499c-9ae4-3b96b20d1847" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.079904 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.079975 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080080 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080224 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080257 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080280 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " 
pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080341 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080602 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.084306 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.086595 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.087249 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.088199 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.089900 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.103465 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.822533 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.823419 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-log" containerID="cri-o://87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" gracePeriod=30 Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.823557 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" containerID="cri-o://14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" gracePeriod=30 Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832703 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerStarted","Data":"6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7"} Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832773 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerStarted","Data":"b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112"} Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832909 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832951 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.842447 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerStarted","Data":"3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714"} Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.846193 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.862276 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podStartSLOduration=9.862258749 podStartE2EDuration="9.862258749s" podCreationTimestamp="2026-01-30 22:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:48.861275423 +0000 UTC m=+1424.822522456" watchObservedRunningTime="2026-01-30 22:03:48.862258749 +0000 UTC m=+1424.823505782" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.868783 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004138 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004277 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004408 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004567 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004620 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004656 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004693 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004750 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004971 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.005191 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.005210 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.012525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x" (OuterVolumeSpecName: "kube-api-access-k7j6x") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "kube-api-access-k7j6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.027283 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data" (OuterVolumeSpecName: "config-data") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.027811 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts" (OuterVolumeSpecName: "scripts") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.030164 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.030281 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.083344 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" path="/var/lib/kubelet/pods/ed53d4b7-eca6-4720-95ca-82db55e50fe7/volumes" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107543 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107584 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107595 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107605 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107614 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.903587 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerStarted","Data":"998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e"} Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.907044 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3b83faf-96cc-4787-814f-774416ea9811" containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" exitCode=143 Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.907149 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerDied","Data":"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d"} Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.907192 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.942837 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.942807511 podStartE2EDuration="11.942807511s" podCreationTimestamp="2026-01-30 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:49.934164529 +0000 UTC m=+1425.895411572" watchObservedRunningTime="2026-01-30 22:03:49.942807511 +0000 UTC m=+1425.904054544" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.983517 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.991636 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.028111 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.031153 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.038405 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.039543 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.068608 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.163517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.164939 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165093 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165211 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165297 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " 
pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165413 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165502 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267480 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267559 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267638 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267715 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267751 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267808 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.268338 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"ceilometer-0\" (UID: 
\"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.268674 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.276303 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.276927 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.277582 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.282046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.310416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.360428 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.931373 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: W0130 22:03:50.934146 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91a73a79_d17b_4370_a554_acccc33344ba.slice/crio-05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867 WatchSource:0}: Error finding container 05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867: Status 404 returned error can't find the container with id 05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867 Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.082158 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be55c985-e7f8-499c-9ae4-3b96b20d1847" path="/var/lib/kubelet/pods/be55c985-e7f8-499c-9ae4-3b96b20d1847/volumes" Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.832759 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.833724 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" containerID="cri-o://7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53" gracePeriod=30 Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.834303 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" containerID="cri-o://24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6" gracePeriod=30 Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.027507 4979 generic.go:334] "Generic (PLEG): container finished" podID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerID="31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d" exitCode=0 Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.028125 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerDied","Data":"31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d"} Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.032330 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867"} Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.671986 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772180 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772401 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772565 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772630 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772675 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.782817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm" (OuterVolumeSpecName: "kube-api-access-4qjbm") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "kube-api-access-4qjbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.787675 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.848261 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config" (OuterVolumeSpecName: "config") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.854701 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.861551 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874366 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874447 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874543 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874582 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874612 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874659 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875647 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875671 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875680 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875689 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.878806 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts" (OuterVolumeSpecName: "scripts") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.891594 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs" (OuterVolumeSpecName: "logs") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.895346 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s" (OuterVolumeSpecName: "kube-api-access-hj67s") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "kube-api-access-hj67s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.906558 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.906724 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.948515 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data" (OuterVolumeSpecName: "config-data") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.949201 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.975946 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976022 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976328 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976362 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976374 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976395 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976414 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976423 4979 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976433 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.981804 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.996661 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.007675 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.042111 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.047994 4979 generic.go:334] "Generic (PLEG): container finished" podID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerID="7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53" exitCode=143 Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.048106 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerDied","Data":"7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.050552 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerDied","Data":"d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.050637 4979 scope.go:117] "RemoveContainer" containerID="a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.050862 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.059235 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063093 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3b83faf-96cc-4787-814f-774416ea9811" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" exitCode=0 Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063143 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerDied","Data":"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063179 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerDied","Data":"3e810c936e02f2844b80a87456dccb9adbb5f44faaa30ddef373326002018cd3"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063228 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.079817 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.079859 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.079871 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.089537 4979 scope.go:117] "RemoveContainer" containerID="31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.138994 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.146766 4979 scope.go:117] "RemoveContainer" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.169300 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.178466 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.184315 4979 scope.go:117] "RemoveContainer" containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.193495 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207189 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207831 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207853 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207879 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207887 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207909 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207920 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207943 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b83faf-96cc-4787-814f-774416ea9811" 
containerName="glance-log" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207951 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-log" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208194 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208209 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-log" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208221 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208238 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.209521 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.212454 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.212773 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.234023 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.242077 4979 scope.go:117] "RemoveContainer" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.247880 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd\": container with ID starting with 14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd not found: ID does not exist" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.247943 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd"} err="failed to get container status \"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd\": rpc error: code = NotFound desc = could not find container \"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd\": container with ID starting with 14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd not found: ID does not exist" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.247980 4979 scope.go:117] "RemoveContainer" containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.251281 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d\": container with ID starting with 87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d not found: ID does not exist" 
containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.251337 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d"} err="failed to get container status \"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d\": rpc error: code = NotFound desc = could not find container \"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d\": container with ID starting with 87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d not found: ID does not exist" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385489 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385764 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385833 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385875 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.386152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.386251 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 
22:03:53.386276 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.487879 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488358 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488905 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488952 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488980 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488997 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.489352 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.490273 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.490337 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.494534 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.498754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.499763 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.501315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.509844 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.533998 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.549009 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.982650 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.077869 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637"} Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.077914 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"} Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.139651 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.272159 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.037494 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.039640 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.089348 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" path="/var/lib/kubelet/pods/ba4b7345-9c9c-46e9-ac9a-d84093867012/volumes" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.090762 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b83faf-96cc-4787-814f-774416ea9811" path="/var/lib/kubelet/pods/c3b83faf-96cc-4787-814f-774416ea9811/volumes" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.102477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerStarted","Data":"2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8"} Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.102546 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerStarted","Data":"1ef7dfba2654b435b80b29127f1c9700a1f54fff7b56b29307a2ed4beab2ff4b"} Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.471172 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.472546 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.487523 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.543906 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.544234 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.592948 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.594542 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.608518 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.610189 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.618088 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.620837 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.645095 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647297 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647332 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647383 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647446 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.656500 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.683438 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757097 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757194 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757219 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757317 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.758486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.794219 4979 util.go:30] "No sandbox for pod can be found. 
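
Each nova-*-db-create and account-create pod above mounts the same pair of volumes: an operator-scripts ConfigMap plus a projected kube-api-access token. A hypothetical reconstruction, in client-go types, of what such a ConfigMap volume stanza looks like; the ConfigMap name below is a guess, since the operator-created manifest is not part of this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Hypothetical reconstruction of the "operator-scripts" volume
        // mounted by nova-api-db-create-qr8n5; the real manifest is created
        // by the operator and does not appear in this log.
        vol := corev1.Volume{
            Name: "operator-scripts",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "nova-api-db-create-scripts", // assumed name
                    },
                },
            },
        }
        fmt.Println(vol.Name, vol.ConfigMap.Name)
    }
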
Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.800396 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.801876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.807082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.814169 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.843108 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.844612 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.851443 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.859687 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.859984 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860129 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860285 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860384 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"nova-cell1-db-create-fgz9b\" (UID: 
\"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860463 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.861534 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.879541 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.892180 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.982872 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.983065 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.983171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.983322 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.984539 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " 
pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.001195 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.018132 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.021627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.022890 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.025980 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.033208 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.050869 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.053513 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.064123 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.077715 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.100071 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.100285 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.149768 4979 generic.go:334] "Generic (PLEG): container finished" podID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerID="24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6" exitCode=0 Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.149897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerDied","Data":"24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6"} Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.173427 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerStarted","Data":"aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635"} Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.208234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.208341 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.210309 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.210278575 podStartE2EDuration="3.210278575s" podCreationTimestamp="2026-01-30 22:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:56.206625228 +0000 UTC m=+1432.167872261" watchObservedRunningTime="2026-01-30 22:03:56.210278575 +0000 UTC m=+1432.171525598" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.217597 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod 
\"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.236216 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.306822 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.311624 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.380087 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.412900 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.412982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413021 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413144 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413189 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413259 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413353 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: 
\"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413398 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.416555 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs" (OuterVolumeSpecName: "logs") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.418559 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.420474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd" (OuterVolumeSpecName: "kube-api-access-htdgd") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "kube-api-access-htdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.422377 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts" (OuterVolumeSpecName: "scripts") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.427294 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516743 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516777 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516786 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516795 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516807 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.526758 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.579800 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data" (OuterVolumeSpecName: "config-data") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.620217 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.620704 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.629277 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.642296 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.722748 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.722784 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.903300 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:03:57 crc kubenswrapper[4979]: W0130 22:03:57.088717 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a8a7dfa_7a48_4b28_b2c1_22ae610f004a.slice/crio-df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951 WatchSource:0}: Error finding container df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951: Status 404 returned error can't find the container with id df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.092152 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.096526 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.104721 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:03:57 crc kubenswrapper[4979]: W0130 22:03:57.108115 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb76b95_4c2d_478d_b9d9_e6e182859ccd.slice/crio-20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351 WatchSource:0}: Error finding container 20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351: Status 404 returned error can't find the container with id 20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.230552 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.259048 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:03:57 crc kubenswrapper[4979]: W0130 22:03:57.259242 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabec2c46_a984_4314_88c5_d50d20ef7f8d.slice/crio-638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc WatchSource:0}: Error finding container 638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc: Status 404 returned error can't find the container with id 638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.289709 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468"} Jan 30 22:03:57 crc kubenswrapper[4979]: 
I0130 22:03:57.289913 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" containerID="cri-o://b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.289998 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.290081 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" containerID="cri-o://07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.290150 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" containerID="cri-o://c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.290270 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" containerID="cri-o://8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.299388 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qr8n5" event={"ID":"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e","Type":"ContainerStarted","Data":"5287613e36eb65b9ace85e182d98569185f491a0c8401f643ff7f5d20d7ff1a1"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.307283 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jjtrg" event={"ID":"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a","Type":"ContainerStarted","Data":"df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.313797 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-m57kd" event={"ID":"bd648327-e40d-4f17-9366-1773fa95f47a","Type":"ContainerStarted","Data":"04b227162d1780e1e9d4e54a32ca21d9c900228e88804dd47aed9db864e05510"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.320788 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.838694013 podStartE2EDuration="8.32077001s" podCreationTimestamp="2026-01-30 22:03:49 +0000 UTC" firstStartedPulling="2026-01-30 22:03:50.937209343 +0000 UTC m=+1426.898456376" lastFinishedPulling="2026-01-30 22:03:56.41928534 +0000 UTC m=+1432.380532373" observedRunningTime="2026-01-30 22:03:57.318935871 +0000 UTC m=+1433.280182904" watchObservedRunningTime="2026-01-30 22:03:57.32077001 +0000 UTC m=+1433.282017033" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.334873 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerDied","Data":"deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.334940 4979 scope.go:117] "RemoveContainer" 
containerID="24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.335045 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.340176 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-drkzw" event={"ID":"adb76b95-4c2d-478d-b9d9-e6e182859ccd","Type":"ContainerStarted","Data":"20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.354725 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qr8n5" podStartSLOduration=2.35470176 podStartE2EDuration="2.35470176s" podCreationTimestamp="2026-01-30 22:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:57.34577304 +0000 UTC m=+1433.307020083" watchObservedRunningTime="2026-01-30 22:03:57.35470176 +0000 UTC m=+1433.315948793" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.404024 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.426494 4979 scope.go:117] "RemoveContainer" containerID="7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.442418 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.451724 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: E0130 22:03:57.452351 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452372 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" Jan 30 22:03:57 crc kubenswrapper[4979]: E0130 22:03:57.452406 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452413 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452627 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452639 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.453850 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.453850 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.458757 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.459088 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.486119 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550425 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550578 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550607 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550843 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.551433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0"
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.655685 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656724 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656776 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.657676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.657935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.658276 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.660439 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.687359 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.690565 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.692640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.694118 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.727656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.760168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.801685 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.801685 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.350336 4979 generic.go:334] "Generic (PLEG): container finished" podID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerID="d40ebbabe3d8f2995f627a1ae83a4f0a8052321d11e2329aba49ee99c9ce1294" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.350419 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-016f-account-create-update-brzlt" event={"ID":"abec2c46-a984-4314-88c5-d50d20ef7f8d","Type":"ContainerDied","Data":"d40ebbabe3d8f2995f627a1ae83a4f0a8052321d11e2329aba49ee99c9ce1294"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.350453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-016f-account-create-update-brzlt" event={"ID":"abec2c46-a984-4314-88c5-d50d20ef7f8d","Type":"ContainerStarted","Data":"638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.352966 4979 generic.go:334] "Generic (PLEG): container finished" podID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerID="4346269c3467fb9983ba22a3da499f523fe4b5d9072377bdb3c9eadf809fe8ff" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.353024 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jjtrg" event={"ID":"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a","Type":"ContainerDied","Data":"4346269c3467fb9983ba22a3da499f523fe4b5d9072377bdb3c9eadf809fe8ff"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.354862 4979 generic.go:334] "Generic (PLEG): container finished" podID="bd648327-e40d-4f17-9366-1773fa95f47a" containerID="78e6994e836809eb6c4147c73b39f8c34653cb31054d04a758e600e5a045351d" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.354925 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-m57kd" event={"ID":"bd648327-e40d-4f17-9366-1773fa95f47a","Type":"ContainerDied","Data":"78e6994e836809eb6c4147c73b39f8c34653cb31054d04a758e600e5a045351d"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.357937 4979 generic.go:334] "Generic (PLEG): container finished" podID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerID="65f7df0a5f220ddf8b419657c4d7771409b9a8c3c511a14b07fabfbb8e20fede" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.357974 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-drkzw" event={"ID":"adb76b95-4c2d-478d-b9d9-e6e182859ccd","Type":"ContainerDied","Data":"65f7df0a5f220ddf8b419657c4d7771409b9a8c3c511a14b07fabfbb8e20fede"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360725 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360747 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" exitCode=2
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360758 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" exitCode=0
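
[Editor's note] The "Generic (PLEG): container finished" entries record terminal exit codes relayed from the runtime: 0 for the completed mariadb job containers, 2 for sg-core, which was torn down with ceilometer-0. The same information is surfaced in pod status; a sketch of reading it with client-go, reusing the clientset construction from the earlier example (cs, imports omitted here):

    pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(),
        "nova-cell0-db-create-jjtrg", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, st := range pod.Status.ContainerStatuses {
        // For a finished container, State.Terminated carries the exit code
        // that PLEG logged above (e.g. exitCode=0 for the db-create job).
        if t := st.State.Terminated; t != nil {
            fmt.Printf("%s: exit code %d\n", st.Name, t.ExitCode)
        }
    }
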
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360795 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360843 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360867 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.362670 4979 generic.go:334] "Generic (PLEG): container finished" podID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerID="ce15c22300306383eb564954b64ad58a13fe8c8c246e3d682e1063ba2ed2a496" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.362816 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qr8n5" event={"ID":"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e","Type":"ContainerDied","Data":"ce15c22300306383eb564954b64ad58a13fe8c8c246e3d682e1063ba2ed2a496"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.364859 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerID="d6d25ae31ed5e6d9c7cb7e6adcce8605ff98681415f720f118a7c85b8f2468e0" exitCode=0
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.364888 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fgz9b" event={"ID":"bc0c5054-9597-4b94-a1d6-1f424c1d6de4","Type":"ContainerDied","Data":"d6d25ae31ed5e6d9c7cb7e6adcce8605ff98681415f720f118a7c85b8f2468e0"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.364920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fgz9b" event={"ID":"bc0c5054-9597-4b94-a1d6-1f424c1d6de4","Type":"ContainerStarted","Data":"6e63b7e0b8f850b8f49982133a6589249d41e457d00781d4b3e30f84278b613a"}
Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.553631 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 22:03:58 crc kubenswrapper[4979]: W0130 22:03:58.561537 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec2e945_509e_4cbb_9988_9f6cc840cd62.slice/crio-990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd WatchSource:0}: Error finding container 990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd: Status 404 returned error can't find the container with id 990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd
Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.082607 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" path="/var/lib/kubelet/pods/6e002e48-1108-41f0-a1de-5a6b89d9e534/volumes"
Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.402327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerStarted","Data":"3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45"}
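
[Editor's note] The recurring manager.go:1169 warnings ("Status 404 ... can't find the container") appear to be cAdvisor processing cgroup watch events for containers that CRI-O has not yet registered or has already removed; with short-lived job containers like these db-create pods that window is easy to hit, and the kubelet logs the warning and carries on. A generic sketch of the same tolerate-the-race pattern (the names here are illustrative, not kubelet code):

    import "errors"

    var errContainerNotFound = errors.New("container not found") // stands in for the 404 above

    // handleWatchEvent looks up the container a cgroup event refers to and
    // ignores the event if the container is not (or no longer) known.
    func handleWatchEvent(lookup func(id string) error, id string) error {
        err := lookup(id)
        if errors.Is(err, errContainerNotFound) {
            return nil // benign race between the watch event and container lifecycle
        }
        return err
    }
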
Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.402794 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerStarted","Data":"990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd"}
Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.910574 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qr8n5"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.034842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.038409 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.049771 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" (UID: "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.072972 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r" (OuterVolumeSpecName: "kube-api-access-xk22r") pod "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" (UID: "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e"). InnerVolumeSpecName "kube-api-access-xk22r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.162648 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.162686 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.290666 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.305459 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.324001 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.332086 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.350138 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365733 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"abec2c46-a984-4314-88c5-d50d20ef7f8d\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365781 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365830 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365851 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"bd648327-e40d-4f17-9366-1773fa95f47a\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365906 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365936 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod \"abec2c46-a984-4314-88c5-d50d20ef7f8d\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365989 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"bd648327-e40d-4f17-9366-1773fa95f47a\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366042 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366070 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366112 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") "
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366800 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc0c5054-9597-4b94-a1d6-1f424c1d6de4" (UID: "bc0c5054-9597-4b94-a1d6-1f424c1d6de4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.367207 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abec2c46-a984-4314-88c5-d50d20ef7f8d" (UID: "abec2c46-a984-4314-88c5-d50d20ef7f8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.367646 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" (UID: "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.367737 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adb76b95-4c2d-478d-b9d9-e6e182859ccd" (UID: "adb76b95-4c2d-478d-b9d9-e6e182859ccd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.368270 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd648327-e40d-4f17-9366-1773fa95f47a" (UID: "bd648327-e40d-4f17-9366-1773fa95f47a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.373021 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq" (OuterVolumeSpecName: "kube-api-access-fpsgq") pod "adb76b95-4c2d-478d-b9d9-e6e182859ccd" (UID: "adb76b95-4c2d-478d-b9d9-e6e182859ccd"). InnerVolumeSpecName "kube-api-access-fpsgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.373746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz" (OuterVolumeSpecName: "kube-api-access-jz5pz") pod "bc0c5054-9597-4b94-a1d6-1f424c1d6de4" (UID: "bc0c5054-9597-4b94-a1d6-1f424c1d6de4"). InnerVolumeSpecName "kube-api-access-jz5pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.378231 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl" (OuterVolumeSpecName: "kube-api-access-8zkhl") pod "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" (UID: "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a"). InnerVolumeSpecName "kube-api-access-8zkhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.392520 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp" (OuterVolumeSpecName: "kube-api-access-vg9qp") pod "bd648327-e40d-4f17-9366-1773fa95f47a" (UID: "bd648327-e40d-4f17-9366-1773fa95f47a"). InnerVolumeSpecName "kube-api-access-vg9qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.417137 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qr8n5" event={"ID":"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e","Type":"ContainerDied","Data":"5287613e36eb65b9ace85e182d98569185f491a0c8401f643ff7f5d20d7ff1a1"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.417187 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287613e36eb65b9ace85e182d98569185f491a0c8401f643ff7f5d20d7ff1a1" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.417298 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.420383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fgz9b" event={"ID":"bc0c5054-9597-4b94-a1d6-1f424c1d6de4","Type":"ContainerDied","Data":"6e63b7e0b8f850b8f49982133a6589249d41e457d00781d4b3e30f84278b613a"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.420426 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e63b7e0b8f850b8f49982133a6589249d41e457d00781d4b3e30f84278b613a" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.420494 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430060 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-016f-account-create-update-brzlt" event={"ID":"abec2c46-a984-4314-88c5-d50d20ef7f8d","Type":"ContainerDied","Data":"638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430103 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430180 4979 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430180 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.433100 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jjtrg" event={"ID":"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a","Type":"ContainerDied","Data":"df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951"}
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.433159 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.433888 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.439683 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-m57kd" event={"ID":"bd648327-e40d-4f17-9366-1773fa95f47a","Type":"ContainerDied","Data":"04b227162d1780e1e9d4e54a32ca21d9c900228e88804dd47aed9db864e05510"}
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.439726 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b227162d1780e1e9d4e54a32ca21d9c900228e88804dd47aed9db864e05510"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.439790 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.451724 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerStarted","Data":"10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0"}
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.456563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-drkzw" event={"ID":"adb76b95-4c2d-478d-b9d9-e6e182859ccd","Type":"ContainerDied","Data":"20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351"}
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.456593 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.456611 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351"
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468223 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468259 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468271 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468283 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468293 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468302 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468314 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468323 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468336 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468345 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") on node \"crc\" DevicePath \"\""
m=+1436.448932455" watchObservedRunningTime="2026-01-30 22:04:00.491689419 +0000 UTC m=+1436.452936442" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.374411 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470574 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" exitCode=0 Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470699 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470790 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"} Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470842 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867"} Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470872 4979 scope.go:117] "RemoveContainer" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.496777 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497076 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497195 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497344 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497438 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497474 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: 
\"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.499852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.500346 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.501127 4979 scope.go:117] "RemoveContainer" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.506734 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts" (OuterVolumeSpecName: "scripts") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.507119 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp" (OuterVolumeSpecName: "kube-api-access-msbfp") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "kube-api-access-msbfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.537399 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.575608 4979 scope.go:117] "RemoveContainer" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600177 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600224 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600242 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600254 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600265 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.610499 4979 scope.go:117] "RemoveContainer" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.621932 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data" (OuterVolumeSpecName: "config-data") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.633214 4979 scope.go:117] "RemoveContainer" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.633864 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468\": container with ID starting with 07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468 not found: ID does not exist" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.633911 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468"} err="failed to get container status \"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468\": rpc error: code = NotFound desc = could not find container \"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468\": container with ID starting with 07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.633944 4979 scope.go:117] "RemoveContainer" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.634278 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637\": container with ID starting with 8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637 not found: ID does not exist" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634314 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637"} err="failed to get container status \"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637\": rpc error: code = NotFound desc = could not find container \"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637\": container with ID starting with 8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634328 4979 scope.go:117] "RemoveContainer" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.634694 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": container with ID starting with c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd not found: ID does not exist" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634725 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"} err="failed to get container status \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": rpc error: code = NotFound desc = could not 
Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634725 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"} err="failed to get container status \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": rpc error: code = NotFound desc = could not find container \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": container with ID starting with c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd not found: ID does not exist"
Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634743 4979 scope.go:117] "RemoveContainer" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"
Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.635072 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1\": container with ID starting with b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1 not found: ID does not exist" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"
Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.635097 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"} err="failed to get container status \"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1\": rpc error: code = NotFound desc = could not find container \"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1\": container with ID starting with b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1 not found: ID does not exist"
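
[Editor's note] The RemoveContainer / "DeleteContainer returned error" sequence above is harmless: the ceilometer-0 containers were already purged, so the runtime answers each status query with gRPC NotFound and the kubelet treats the deletion as complete. A sketch of that idempotent pattern against any gRPC API; grpc-go's status and codes packages are real, while the surrounding function is illustrative:

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ignoreNotFound turns gRPC NotFound -- the "container ... not found: ID
    // does not exist" errors above -- into success, making removal idempotent.
    func ignoreNotFound(err error) error {
        if status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }
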
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.703819 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.703864 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.817592 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.826858 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.841900 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842419 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842436 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842454 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842461 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842469 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842476 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842489 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842497 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842507 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842514 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842525 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842533 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc 
kubenswrapper[4979]: E0130 22:04:01.842559 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842567 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842585 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842592 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842599 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842607 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842630 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842638 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842829 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842842 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842850 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842867 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842879 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842885 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842892 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842902 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842912 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" Jan 30 22:04:01 crc 
kubenswrapper[4979]: I0130 22:04:01.842922 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.844842 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.847700 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.848089 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.898735 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.011841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.011929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012184 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012224 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012353 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113744 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113838 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113885 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113907 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.114015 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.114054 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.115953 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.116339 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.120542 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.122235 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.124478 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.134171 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.139444 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.191802 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.342135 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.681471 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:02 crc kubenswrapper[4979]: W0130 22:04:02.684206 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29bcacff_6888_44ea_aea7_79eeedfd2e5c.slice/crio-25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b WatchSource:0}: Error finding container 25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b: Status 404 returned error can't find the container with id 25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.082072 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a73a79-d17b-4370-a554-acccc33344ba" path="/var/lib/kubelet/pods/91a73a79-d17b-4370-a554-acccc33344ba/volumes" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.497234 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b"} Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.549533 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.549595 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.591051 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.609821 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 
22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.519857 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2"} Jan 30 22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.520283 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.520298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23"} Jan 30 22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.520315 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:04:05 crc kubenswrapper[4979]: I0130 22:04:05.532222 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87"} Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.350580 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.352802 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.360945 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.372819 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.373196 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q2lvs" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.373652 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418690 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " 
pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418861 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521632 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521727 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521803 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.530275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.530302 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.530875 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.541227 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.541261 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 
22:04:06.546428 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.673562 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.050931 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.125071 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.328809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.553798 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0"} Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.554374 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.553999 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" containerID="cri-o://75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.553964 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" containerID="cri-o://3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.554023 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-notification-agent" containerID="cri-o://72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.554326 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" containerID="cri-o://94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.556013 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerStarted","Data":"fe92592e0879b96bdce141b30cbe05a1c3b99dc1723f96ea5d0aefbbdc1a1b6d"} Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.581331 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.06106256 podStartE2EDuration="6.581309588s" podCreationTimestamp="2026-01-30 22:04:01 +0000 UTC" 
firstStartedPulling="2026-01-30 22:04:02.687153535 +0000 UTC m=+1438.648400568" lastFinishedPulling="2026-01-30 22:04:07.207400563 +0000 UTC m=+1443.168647596" observedRunningTime="2026-01-30 22:04:07.573875618 +0000 UTC m=+1443.535122651" watchObservedRunningTime="2026-01-30 22:04:07.581309588 +0000 UTC m=+1443.542556621" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.802688 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.802814 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.839552 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.848869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567464 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87" exitCode=2 Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567919 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2" exitCode=0 Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567533 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87"} Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2"} Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.569002 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.569059 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:10 crc kubenswrapper[4979]: I0130 22:04:10.703551 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:10 crc kubenswrapper[4979]: I0130 22:04:10.703680 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:04:10 crc kubenswrapper[4979]: I0130 22:04:10.989817 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:11 crc kubenswrapper[4979]: I0130 22:04:11.618707 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23" exitCode=0 Jan 30 22:04:11 crc kubenswrapper[4979]: I0130 22:04:11.618942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23"} Jan 30 22:04:18 crc kubenswrapper[4979]: I0130 22:04:18.709604 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerStarted","Data":"ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9"} Jan 30 22:04:18 crc kubenswrapper[4979]: I0130 22:04:18.733450 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" podStartSLOduration=1.971037178 podStartE2EDuration="12.73341623s" podCreationTimestamp="2026-01-30 22:04:06 +0000 UTC" firstStartedPulling="2026-01-30 22:04:07.332411244 +0000 UTC m=+1443.293658277" lastFinishedPulling="2026-01-30 22:04:18.094790306 +0000 UTC m=+1454.056037329" observedRunningTime="2026-01-30 22:04:18.723102963 +0000 UTC m=+1454.684349996" watchObservedRunningTime="2026-01-30 22:04:18.73341623 +0000 UTC m=+1454.694663263" Jan 30 22:04:30 crc kubenswrapper[4979]: I0130 22:04:30.848926 4979 generic.go:334] "Generic (PLEG): container finished" podID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerID="ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9" exitCode=0 Jan 30 22:04:30 crc kubenswrapper[4979]: I0130 22:04:30.849057 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerDied","Data":"ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9"} Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.198928 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.239161 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.368165 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.368426 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.369493 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.369551 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.376629 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv" (OuterVolumeSpecName: "kube-api-access-jwtbv") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "kube-api-access-jwtbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.377433 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts" (OuterVolumeSpecName: "scripts") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.403931 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.408132 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data" (OuterVolumeSpecName: "config-data") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.500938 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.501026 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.501071 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.501090 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.873130 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerDied","Data":"fe92592e0879b96bdce141b30cbe05a1c3b99dc1723f96ea5d0aefbbdc1a1b6d"} Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.873180 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe92592e0879b96bdce141b30cbe05a1c3b99dc1723f96ea5d0aefbbdc1a1b6d" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.873295 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.016935 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:04:33 crc kubenswrapper[4979]: E0130 22:04:33.017575 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerName="nova-cell0-conductor-db-sync" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.017602 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerName="nova-cell0-conductor-db-sync" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.017859 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerName="nova-cell0-conductor-db-sync" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.018731 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.021592 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q2lvs" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.022979 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.034618 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.115973 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.116507 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.116545 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.218421 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.218498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.218533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.227672 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.228086 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.245607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.338417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.818667 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.886454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerStarted","Data":"9644ea1b50d881a5fc87efbeb25d5fe3195c9de5bf0f6fd1b1d5b2e65c2a5124"} Jan 30 22:04:34 crc kubenswrapper[4979]: I0130 22:04:34.900378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerStarted","Data":"383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb"} Jan 30 22:04:34 crc kubenswrapper[4979]: I0130 22:04:34.901080 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:34 crc kubenswrapper[4979]: I0130 22:04:34.926536 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.9264945620000002 podStartE2EDuration="2.926494562s" podCreationTimestamp="2026-01-30 22:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:34.921869458 +0000 UTC m=+1470.883116521" watchObservedRunningTime="2026-01-30 22:04:34.926494562 +0000 UTC m=+1470.887741595" Jan 30 22:04:37 crc kubenswrapper[4979]: I0130 22:04:37.937940 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0" exitCode=137 Jan 30 22:04:37 crc kubenswrapper[4979]: I0130 22:04:37.938775 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0"} Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.569428 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657580 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657727 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657808 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657897 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657970 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.658001 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.659101 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.659270 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.666303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts" (OuterVolumeSpecName: "scripts") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.667080 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8" (OuterVolumeSpecName: "kube-api-access-dl9n8") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "kube-api-access-dl9n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.697080 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.737943 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.760709 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.760931 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761060 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761154 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761250 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761324 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.775850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data" (OuterVolumeSpecName: "config-data") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.864150 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.952108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b"} Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.952241 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.953408 4979 scope.go:117] "RemoveContainer" containerID="3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.982135 4979 scope.go:117] "RemoveContainer" containerID="75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.005800 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.014677 4979 scope.go:117] "RemoveContainer" containerID="72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.062383 4979 scope.go:117] "RemoveContainer" containerID="94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.088724 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.088780 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089231 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089249 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089282 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089309 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089324 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089331 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089356 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" 
containerName="ceilometer-notification-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089383 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-notification-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089639 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089660 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089671 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-notification-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089681 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.092100 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.094879 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.095150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.102292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.170847 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171197 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171409 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171498 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171544 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " 
pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171667 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274117 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274193 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274231 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274262 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274442 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.275008 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"ceilometer-0\" (UID: 
\"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.275354 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.281301 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.282712 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.284710 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.295300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.306395 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.415951 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.870737 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.964855 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"01c6464a8f040a12abc8ff599cbfa55d11072c8f6eee4cfc9c902ea1c0c52c3a"} Jan 30 22:04:40 crc kubenswrapper[4979]: I0130 22:04:40.978250 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"} Jan 30 22:04:41 crc kubenswrapper[4979]: I0130 22:04:41.083052 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" path="/var/lib/kubelet/pods/29bcacff-6888-44ea-aea7-79eeedfd2e5c/volumes" Jan 30 22:04:42 crc kubenswrapper[4979]: I0130 22:04:42.020371 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d"} Jan 30 22:04:43 crc kubenswrapper[4979]: I0130 22:04:43.041132 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393"} Jan 30 22:04:43 crc kubenswrapper[4979]: I0130 22:04:43.375718 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.181829 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.184069 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.190722 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.191176 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.202987 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289685 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289767 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392603 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392716 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392750 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tw8w\" (UniqueName: 
\"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.401615 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.402234 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.432741 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.438511 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.461220 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.470564 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.474437 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.476814 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.478479 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.484104 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.491490 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.507259 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.508800 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.511001 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.511595 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.522723 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.534784 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599586 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599656 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599701 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599794 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599874 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599896 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599924 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599959 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.670183 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.692810 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.705408 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706018 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706150 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706163 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706282 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706428 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706482 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706786 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706832 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706908 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.714006 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.735480 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.746498 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.760787 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.761892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.766640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.767484 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 
22:04:44.768958 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.774168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.782461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.784372 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.825928 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826255 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826570 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826860 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.841332 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.841965 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.845326 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.857057 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929345 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929472 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929552 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929599 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929730 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929824 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929870 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " 
pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929892 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929921 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.931627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.934686 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.937497 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.940239 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.953624 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033791 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033873 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033940 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc 
kubenswrapper[4979]: I0130 22:04:45.033972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.034039 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.035186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.035786 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.037006 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.037966 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.050384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.070246 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.117719 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.188986 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.328266 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.329686 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.340139 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.340464 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.345077 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.446768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.447407 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.447466 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.447571 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550728 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " 
pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.560349 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.560553 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.561069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.586289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.650887 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.054252 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.099160 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.108859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerStarted","Data":"418ba1031b4d4e3f1080f6d157787ad41890fff919486dd4b096f4ae99738787"} Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.114117 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.116130 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerStarted","Data":"f8e2db0bc8aced80b6b6b46a1c0ed2401ba1be3a5bf03e9af3531ffe48935419"} Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.124002 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.136381 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"] Jan 30 22:04:47 crc kubenswrapper[4979]: W0130 22:04:47.148696 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcf14a9_0e1b_4d80_9a4f_124eb0297975.slice/crio-2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf WatchSource:0}: Error finding container 2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf: Status 404 returned error can't find the container with id 2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.149128 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.222022 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.165433 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerStarted","Data":"03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.166422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerStarted","Data":"8439fa81627ed0d7327a33566a06586c473b5bde902c6eee485f6d5ed225dc1e"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.199586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.201778 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.207759 4979 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-gfv78" podStartSLOduration=3.207722071 podStartE2EDuration="3.207722071s" podCreationTimestamp="2026-01-30 22:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:48.19163266 +0000 UTC m=+1484.152879713" watchObservedRunningTime="2026-01-30 22:04:48.207722071 +0000 UTC m=+1484.168969104" Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.216645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerStarted","Data":"2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.228553 4979 generic.go:334] "Generic (PLEG): container finished" podID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerID="013d174f6848cb2abad2b004411d67e5b0bf2bc2e07bdd6263bb0777501bbd65" exitCode=0 Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.228638 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerDied","Data":"013d174f6848cb2abad2b004411d67e5b0bf2bc2e07bdd6263bb0777501bbd65"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.228675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerStarted","Data":"dad5ecae947304a11e938cd18a6af2bcf48628237b04604b4febaa6b29c4e97a"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.232939 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerStarted","Data":"6e35d2aa80751cf03f328d82e8a9f5b326aa2261f7ea467cd2e725d52fe418c8"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.308075 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.390299609 podStartE2EDuration="9.308046741s" podCreationTimestamp="2026-01-30 22:04:39 +0000 UTC" firstStartedPulling="2026-01-30 22:04:39.884138858 +0000 UTC m=+1475.845385891" lastFinishedPulling="2026-01-30 22:04:47.80188599 +0000 UTC m=+1483.763133023" observedRunningTime="2026-01-30 22:04:48.270698439 +0000 UTC m=+1484.231945562" watchObservedRunningTime="2026-01-30 22:04:48.308046741 +0000 UTC m=+1484.269293774" Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.317066 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerStarted","Data":"34e4ecb3720c18b8e6cdeb76dc056d9810bc42be63e75ebe0aec06bf1bbc4605"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.320063 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerStarted","Data":"5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef"} Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.955829 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pqfg4" podStartSLOduration=4.955792589 podStartE2EDuration="4.955792589s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:48.366975241 +0000 UTC m=+1484.328222264" watchObservedRunningTime="2026-01-30 22:04:48.955792589 +0000 UTC m=+1484.917039622" Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.962388 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.970261 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:04:49 crc kubenswrapper[4979]: I0130 22:04:49.375704 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerStarted","Data":"b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457"} Jan 30 22:04:49 crc kubenswrapper[4979]: I0130 22:04:49.406771 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" podStartSLOduration=5.406742049 podStartE2EDuration="5.406742049s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:49.399150495 +0000 UTC m=+1485.360397528" watchObservedRunningTime="2026-01-30 22:04:49.406742049 +0000 UTC m=+1485.367989082" Jan 30 22:04:50 crc kubenswrapper[4979]: I0130 22:04:50.189992 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.190332 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.269612 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.269908 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" containerID="cri-o://cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3" gracePeriod=10 Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.441537 4979 generic.go:334] "Generic (PLEG): container finished" podID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerID="cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3" exitCode=0 Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.441601 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerDied","Data":"cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3"} Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.830114 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937461 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937550 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937655 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937872 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937996 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.971917 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq" (OuterVolumeSpecName: "kube-api-access-sqtbq") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "kube-api-access-sqtbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.044931 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.101362 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.104971 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.112382 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config" (OuterVolumeSpecName: "config") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.112779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.113432 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146934 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146970 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146983 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146991 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146999 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.461506 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerStarted","Data":"ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55"} Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.463346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerStarted","Data":"4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a"} Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.463443 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a" gracePeriod=30 Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.471394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerDied","Data":"bd0c08ab5da0f9972ab0ecfaa7d4a96b3e692f626faf2e99b754b19a6fd17552"} Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.471874 4979 scope.go:117] "RemoveContainer" containerID="cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.471730 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.480993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerStarted","Data":"697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738"} Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.481070 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerStarted","Data":"c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804"} Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.484196 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerStarted","Data":"fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d"} Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.500268 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.01186695 podStartE2EDuration="12.500240652s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.080443256 +0000 UTC m=+1483.041690329" lastFinishedPulling="2026-01-30 22:04:55.568816998 +0000 UTC m=+1491.530064031" observedRunningTime="2026-01-30 22:04:56.491482577 +0000 UTC m=+1492.452729620" watchObservedRunningTime="2026-01-30 22:04:56.500240652 +0000 UTC m=+1492.461487695" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.526968 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.988133242 podStartE2EDuration="12.526940527s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.148555742 +0000 UTC m=+1483.109802775" lastFinishedPulling="2026-01-30 22:04:55.687363027 +0000 UTC m=+1491.648610060" observedRunningTime="2026-01-30 22:04:56.511284028 +0000 UTC m=+1492.472531061" watchObservedRunningTime="2026-01-30 22:04:56.526940527 +0000 UTC m=+1492.488187560" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.544907 4979 scope.go:117] "RemoveContainer" 
containerID="d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.547881 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.128573749 podStartE2EDuration="12.547854268s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.145173932 +0000 UTC m=+1483.106420965" lastFinishedPulling="2026-01-30 22:04:55.564454461 +0000 UTC m=+1491.525701484" observedRunningTime="2026-01-30 22:04:56.543089991 +0000 UTC m=+1492.504337024" watchObservedRunningTime="2026-01-30 22:04:56.547854268 +0000 UTC m=+1492.509101301" Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.577479 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.585829 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.081492 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" path="/var/lib/kubelet/pods/058e90a8-7816-4982-96eb-0390f9f09ef5/volumes" Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.498945 4979 generic.go:334] "Generic (PLEG): container finished" podID="15e523da-837e-4af0-835b-55b1950fc487" containerID="5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef" exitCode=0 Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.499020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerDied","Data":"5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef"} Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.502155 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerStarted","Data":"b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c"} Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.502296 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" containerID="cri-o://ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55" gracePeriod=30 Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.502435 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" containerID="cri-o://b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c" gracePeriod=30 Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.554353 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.060475936 podStartE2EDuration="13.554311834s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.188496084 +0000 UTC m=+1483.149743117" lastFinishedPulling="2026-01-30 22:04:55.682331982 +0000 UTC m=+1491.643579015" observedRunningTime="2026-01-30 22:04:57.548119448 +0000 UTC m=+1493.509366481" watchObservedRunningTime="2026-01-30 22:04:57.554311834 +0000 UTC m=+1493.515558867" Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.522872 4979 generic.go:334] "Generic 
(PLEG): container finished" podID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerID="b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c" exitCode=0 Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.523501 4979 generic.go:334] "Generic (PLEG): container finished" podID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerID="ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55" exitCode=143 Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.523453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerDied","Data":"b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c"} Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.523663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerDied","Data":"ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55"} Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.002048 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.116953 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.117112 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.117178 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.117390 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.126309 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts" (OuterVolumeSpecName: "scripts") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.129232 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w" (OuterVolumeSpecName: "kube-api-access-9tw8w") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "kube-api-access-9tw8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.150156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data" (OuterVolumeSpecName: "config-data") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.151398 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.219959 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.220014 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.220049 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.220072 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.363380 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424425 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424471 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424536 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.425148 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs" (OuterVolumeSpecName: "logs") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.429356 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w" (OuterVolumeSpecName: "kube-api-access-sfr9w") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "kube-api-access-sfr9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.451849 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.452885 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data" (OuterVolumeSpecName: "config-data") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526875 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526926 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526940 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526949 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.533187 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerDied","Data":"2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf"} Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.533262 4979 scope.go:117] "RemoveContainer" containerID="b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.533431 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.543026 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerDied","Data":"f8e2db0bc8aced80b6b6b46a1c0ed2401ba1be3a5bf03e9af3531ffe48935419"} Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.543138 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e2db0bc8aced80b6b6b46a1c0ed2401ba1be3a5bf03e9af3531ffe48935419" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.543209 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.577093 4979 scope.go:117] "RemoveContainer" containerID="ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.585988 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.600178 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.614506 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.614976 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e523da-837e-4af0-835b-55b1950fc487" containerName="nova-manage" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.614996 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e523da-837e-4af0-835b-55b1950fc487" containerName="nova-manage" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615016 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615022 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615056 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615063 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615074 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615080 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615094 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="init" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615100 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="init" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615292 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615311 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615348 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e523da-837e-4af0-835b-55b1950fc487" containerName="nova-manage" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615357 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.616538 4979 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.620984 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.621265 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.625679 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.714571 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.715297 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" containerID="cri-o://c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804" gracePeriod=30 Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.715383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" containerID="cri-o://697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738" gracePeriod=30 Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.730245 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.730530 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" containerID="cri-o://fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d" gracePeriod=30 Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733065 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733370 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733426 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733746 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.802869 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.803862 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-zljzj logs nova-metadata-tls-certs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-metadata-0" podUID="3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836258 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836371 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.837846 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.842923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.842999 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 22:04:59 crc kubenswrapper[4979]: 
I0130 22:04:59.843230 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.843651 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.856950 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.936391 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556086 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerID="697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738" exitCode=0 Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556133 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerID="c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804" exitCode=143 Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerDied","Data":"697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738"} Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556213 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556212 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerDied","Data":"c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804"} Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.567792 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.652983 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653137 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653478 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653621 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653731 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.654090 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs" (OuterVolumeSpecName: "logs") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.659787 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data" (OuterVolumeSpecName: "config-data") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.661287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.662311 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj" (OuterVolumeSpecName: "kube-api-access-zljzj") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "kube-api-access-zljzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.677299 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.756756 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757337 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757353 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757367 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757384 4979 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.087644 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" path="/var/lib/kubelet/pods/bfcf14a9-0e1b-4d80-9a4f-124eb0297975/volumes" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.143768 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.186883 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187123 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187356 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187778 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs" (OuterVolumeSpecName: "logs") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.188783 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.193587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6" (OuterVolumeSpecName: "kube-api-access-mwbx6") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "kube-api-access-mwbx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.215458 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle podName:fe19f1e0-5b59-46b5-a88c-eb1600e144ca nodeName:}" failed. No retries permitted until 2026-01-30 22:05:01.715413106 +0000 UTC m=+1497.676660159 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca") : error deleting /var/lib/kubelet/pods/fe19f1e0-5b59-46b5-a88c-eb1600e144ca/volume-subpaths: remove /var/lib/kubelet/pods/fe19f1e0-5b59-46b5-a88c-eb1600e144ca/volume-subpaths: no such file or directory Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.220008 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data" (OuterVolumeSpecName: "config-data") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.290235 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.290277 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.402840 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62853806_2bda_4664_b5e7_cc1dc951f658.slice/crio-conmon-fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.571297 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.571264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerDied","Data":"6e35d2aa80751cf03f328d82e8a9f5b326aa2261f7ea467cd2e725d52fe418c8"} Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.573259 4979 scope.go:117] "RemoveContainer" containerID="697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.573798 4979 generic.go:334] "Generic (PLEG): container finished" podID="62853806-2bda-4664-b5e7-cc1dc951f658" containerID="fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d" exitCode=0 Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.573919 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.574092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerDied","Data":"fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d"} Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.616496 4979 scope.go:117] "RemoveContainer" containerID="c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.641578 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.668889 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.687567 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.688109 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688132 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.688148 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688155 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688452 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688487 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.690240 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.693583 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.693684 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.699241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.806662 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807018 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807092 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807169 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807255 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.813736 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913040 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913184 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913232 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913508 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.914126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.921308 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.921471 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.921695 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " 
pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.944124 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.949402 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.953800 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.967414 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.969991 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.973258 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.977245 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015945 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015997 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.026852 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.039590 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.039656 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.118496 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.119301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.119470 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.119710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.120452 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.123705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.124649 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.140415 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " 
pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.223121 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.290080 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.323075 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"62853806-2bda-4664-b5e7-cc1dc951f658\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.323263 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"62853806-2bda-4664-b5e7-cc1dc951f658\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.323453 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"62853806-2bda-4664-b5e7-cc1dc951f658\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.331361 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc" (OuterVolumeSpecName: "kube-api-access-9jnkc") pod "62853806-2bda-4664-b5e7-cc1dc951f658" (UID: "62853806-2bda-4664-b5e7-cc1dc951f658"). InnerVolumeSpecName "kube-api-access-9jnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.356262 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data" (OuterVolumeSpecName: "config-data") pod "62853806-2bda-4664-b5e7-cc1dc951f658" (UID: "62853806-2bda-4664-b5e7-cc1dc951f658"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.358406 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62853806-2bda-4664-b5e7-cc1dc951f658" (UID: "62853806-2bda-4664-b5e7-cc1dc951f658"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.426860 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.426919 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.426930 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:02 crc kubenswrapper[4979]: W0130 22:05:02.546973 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb45ea9a1_6c1f_4719_8432_2add7fdef96d.slice/crio-cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a WatchSource:0}: Error finding container cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a: Status 404 returned error can't find the container with id cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.548877 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.589046 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerDied","Data":"34e4ecb3720c18b8e6cdeb76dc056d9810bc42be63e75ebe0aec06bf1bbc4605"} Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.589118 4979 scope.go:117] "RemoveContainer" containerID="fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.589069 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.590740 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerStarted","Data":"cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a"} Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.635589 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.663841 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.699522 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: E0130 22:05:02.700791 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.700814 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.701491 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.702997 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.706072 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.727393 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.733735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.733799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.733859 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.762883 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: W0130 22:05:02.763969 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd48f1e79_1816_4321_ba02_25d28d095a47.slice/crio-28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda 
WatchSource:0}: Error finding container 28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda: Status 404 returned error can't find the container with id 28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.836734 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.836824 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.836905 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.842880 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.843782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.855356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.025780 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.085764 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" path="/var/lib/kubelet/pods/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7/volumes" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.086349 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" path="/var/lib/kubelet/pods/62853806-2bda-4664-b5e7-cc1dc951f658/volumes" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.087401 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" path="/var/lib/kubelet/pods/fe19f1e0-5b59-46b5-a88c-eb1600e144ca/volumes" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.531065 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:03 crc kubenswrapper[4979]: W0130 22:05:03.535615 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4df90142_0487_4f26_8fb8_4ea21cda53d5.slice/crio-7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7 WatchSource:0}: Error finding container 7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7: Status 404 returned error can't find the container with id 7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7 Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.603195 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerStarted","Data":"7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.605258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerStarted","Data":"356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.605292 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerStarted","Data":"57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.605305 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerStarted","Data":"28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.609347 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerStarted","Data":"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.609385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerStarted","Data":"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.641928 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.641903256 podStartE2EDuration="2.641903256s" 
podCreationTimestamp="2026-01-30 22:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:03.628446574 +0000 UTC m=+1499.589693627" watchObservedRunningTime="2026-01-30 22:05:03.641903256 +0000 UTC m=+1499.603150289" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.661804 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.661784958 podStartE2EDuration="2.661784958s" podCreationTimestamp="2026-01-30 22:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:03.656120207 +0000 UTC m=+1499.617367230" watchObservedRunningTime="2026-01-30 22:05:03.661784958 +0000 UTC m=+1499.623031991" Jan 30 22:05:04 crc kubenswrapper[4979]: I0130 22:05:04.644973 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerStarted","Data":"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"} Jan 30 22:05:04 crc kubenswrapper[4979]: I0130 22:05:04.686554 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.686526664 podStartE2EDuration="2.686526664s" podCreationTimestamp="2026-01-30 22:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:04.676602108 +0000 UTC m=+1500.637849131" watchObservedRunningTime="2026-01-30 22:05:04.686526664 +0000 UTC m=+1500.647773687" Jan 30 22:05:06 crc kubenswrapper[4979]: I0130 22:05:06.671011 4979 generic.go:334] "Generic (PLEG): container finished" podID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerID="03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e" exitCode=0 Jan 30 22:05:06 crc kubenswrapper[4979]: I0130 22:05:06.671109 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerDied","Data":"03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e"} Jan 30 22:05:07 crc kubenswrapper[4979]: I0130 22:05:07.027293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:05:07 crc kubenswrapper[4979]: I0130 22:05:07.027387 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.026766 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.070639 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156151 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156270 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156355 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156462 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.162320 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm" (OuterVolumeSpecName: "kube-api-access-tk2lm") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "kube-api-access-tk2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.163470 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts" (OuterVolumeSpecName: "scripts") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.188740 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data" (OuterVolumeSpecName: "config-data") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.189978 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259014 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259425 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259445 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259458 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.692777 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerDied","Data":"8439fa81627ed0d7327a33566a06586c473b5bde902c6eee485f6d5ed225dc1e"} Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.693241 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8439fa81627ed0d7327a33566a06586c473b5bde902c6eee485f6d5ed225dc1e" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.692917 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.806370 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:05:08 crc kubenswrapper[4979]: E0130 22:05:08.806903 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerName="nova-cell1-conductor-db-sync" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.806927 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerName="nova-cell1-conductor-db-sync" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.807213 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerName="nova-cell1-conductor-db-sync" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.808093 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.818873 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.823619 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.871904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.871997 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.872107 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.974316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.974414 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.974449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.986090 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.986132 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.993335 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.168495 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.421358 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.642381 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:05:09 crc kubenswrapper[4979]: W0130 22:05:09.647193 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f627a1e_42e6_4af6_90f1_750c01bcf076.slice/crio-a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854 WatchSource:0}: Error finding container a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854: Status 404 returned error can't find the container with id a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854 Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.705452 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerStarted","Data":"a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854"} Jan 30 22:05:10 crc kubenswrapper[4979]: I0130 22:05:10.721600 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerStarted","Data":"d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918"} Jan 30 22:05:10 crc kubenswrapper[4979]: I0130 22:05:10.722213 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:10 crc kubenswrapper[4979]: I0130 22:05:10.748983 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.7489494309999998 podStartE2EDuration="2.748949431s" podCreationTimestamp="2026-01-30 22:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:10.743911566 +0000 UTC m=+1506.705158609" watchObservedRunningTime="2026-01-30 22:05:10.748949431 +0000 UTC m=+1506.710196464" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.028976 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.029414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.291224 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.291280 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.025960 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.032468 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.037914 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.215927 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.261469 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.261747 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" containerID="cri-o://20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df" gracePeriod=30 Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.373295 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.373299 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.810515 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.196538 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767756 4979 generic.go:334] "Generic (PLEG): container finished" podID="802f295d-d208-4750-ab9b-c3886cb30091" containerID="20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df" exitCode=2 Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerDied","Data":"20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df"} Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767872 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerDied","Data":"073da3757392885be51de106d5a842ae9944cc19e4dc0f6b4686c2786716c716"} Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767887 4979 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="073da3757392885be51de106d5a842ae9944cc19e4dc0f6b4686c2786716c716" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.795562 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.925588 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"802f295d-d208-4750-ab9b-c3886cb30091\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.944817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6" (OuterVolumeSpecName: "kube-api-access-ntbh6") pod "802f295d-d208-4750-ab9b-c3886cb30091" (UID: "802f295d-d208-4750-ab9b-c3886cb30091"). InnerVolumeSpecName "kube-api-access-ntbh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.027846 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580058 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580423 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent" containerID="cri-o://4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580595 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" containerID="cri-o://a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580616 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core" containerID="cri-o://745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.582198 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent" containerID="cri-o://69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.782546 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" exitCode=0 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.782794 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" exitCode=2 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.783016 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.782624 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74"} Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.783818 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393"} Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.823229 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.836613 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.850456 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: E0130 22:05:15.851098 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.851122 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.851364 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.852350 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.856545 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.856980 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.861956 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951422 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951578 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951626 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951727 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056076 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056161 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056240 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.071727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.077239 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.097069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.110020 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.171204 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.756168 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:16 crc kubenswrapper[4979]: W0130 22:05:16.771075 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5eba1b_535d_4519_97c5_5e8b8f003d96.slice/crio-e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561 WatchSource:0}: Error finding container e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561: Status 404 returned error can't find the container with id e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561 Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.775829 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.799652 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" exitCode=0 Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.799721 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"} Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.804516 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerStarted","Data":"e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561"} Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.090763 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802f295d-d208-4750-ab9b-c3886cb30091" path="/var/lib/kubelet/pods/802f295d-d208-4750-ab9b-c3886cb30091/volumes" Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.818408 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerStarted","Data":"10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755"} Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.819008 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.846082 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.46188223 podStartE2EDuration="2.84605993s" podCreationTimestamp="2026-01-30 22:05:15 +0000 UTC" firstStartedPulling="2026-01-30 22:05:16.775364222 +0000 UTC m=+1512.736611295" lastFinishedPulling="2026-01-30 22:05:17.159541962 +0000 UTC m=+1513.120788995" observedRunningTime="2026-01-30 22:05:17.841363814 +0000 UTC m=+1513.802610917" watchObservedRunningTime="2026-01-30 22:05:17.84605993 +0000 UTC m=+1513.807306963" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.823359 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835255 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" exitCode=0 Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835372 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835433 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d"} Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835476 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"01c6464a8f040a12abc8ff599cbfa55d11072c8f6eee4cfc9c902ea1c0c52c3a"} Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835513 4979 scope.go:117] "RemoveContainer" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.875651 4979 scope.go:117] "RemoveContainer" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.916130 4979 scope.go:117] "RemoveContainer" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933544 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933581 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933633 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933659 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933680 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933762 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933859 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.936458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.936691 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.942451 4979 scope.go:117] "RemoveContainer" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.945298 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts" (OuterVolumeSpecName: "scripts") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.953957 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572" (OuterVolumeSpecName: "kube-api-access-vd572") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "kube-api-access-vd572". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.977659 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039752 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039808 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039823 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039837 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039850 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.051215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.075376 4979 scope.go:117] "RemoveContainer" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.075450 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data" (OuterVolumeSpecName: "config-data") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.076052 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74\": container with ID starting with a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74 not found: ID does not exist" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076098 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74"} err="failed to get container status \"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74\": rpc error: code = NotFound desc = could not find container \"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74\": container with ID starting with a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74 not found: ID does not exist" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076129 4979 scope.go:117] "RemoveContainer" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.076560 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393\": container with ID starting with 745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393 not found: ID does not exist" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076645 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393"} err="failed to get container status \"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393\": rpc error: code = NotFound desc = could not find container \"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393\": container with ID starting with 745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393 not found: ID does not exist" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076712 4979 scope.go:117] "RemoveContainer" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.077150 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d\": container with ID starting with 69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d not found: ID does not exist" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.077185 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d"} err="failed to get container status \"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d\": rpc error: code = NotFound desc = could not find container \"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d\": container with ID starting with 69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d 
not found: ID does not exist"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.077208 4979 scope.go:117] "RemoveContainer" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"
Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.084742 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428\": container with ID starting with 4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428 not found: ID does not exist" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.084798 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"} err="failed to get container status \"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428\": rpc error: code = NotFound desc = could not find container \"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428\": container with ID starting with 4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428 not found: ID does not exist"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.141913 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.141956 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.171600 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.190043 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.208140 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209111 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209259 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core"
Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209379 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209457 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent"
Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209533 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209610 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent"
Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209688 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd"
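Note: the paired E/I entries above ("ContainerStatus from runtime service failed" followed by "DeleteContainer returned error", both rpc code = NotFound) are the kubelet re-issuing deletes for containers CRI-O has already removed; during cleanup a NotFound answer means the work is already done, not that something failed. A minimal Go sketch of that idempotent-removal pattern (simplified; the helper names are illustrative, not kubelet's actual code):

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the gRPC "code = NotFound" status the
    // runtime returns once a container ID no longer exists (assumption:
    // simplified from the rpc errors quoted in the entries above).
    var errNotFound = errors.New("rpc error: code = NotFound")

    // removeContainer treats NotFound as success: if the runtime no longer
    // knows the ID, the removal already happened and cleanup can proceed.
    func removeContainer(id string, remove func(string) error) error {
        err := remove(id)
        if errors.Is(err, errNotFound) {
            fmt.Printf("container %s already gone; treating removal as done\n", id)
            return nil
        }
        return err
    }

    func main() {
        gone := func(id string) error {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        if err := removeContainer("4cef65f772a1", gone); err != nil {
            fmt.Println("unexpected:", err)
        }
    }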
"RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209756 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210070 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210179 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210277 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210369 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.212849 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.217461 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.217753 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.217951 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.221176 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346204 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346465 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346690 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346762 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.347182 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.347455 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454116 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454243 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454323 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc 
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.455191 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.461377 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.462479 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.465716 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.466900 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.475917 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.480919 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0"
Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.546841 4979 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:20 crc kubenswrapper[4979]: I0130 22:05:20.073778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:20 crc kubenswrapper[4979]: W0130 22:05:20.077968 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5452d41_b901_4e6c_876c_06c0f44ba8ef.slice/crio-5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece WatchSource:0}: Error finding container 5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece: Status 404 returned error can't find the container with id 5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece Jan 30 22:05:20 crc kubenswrapper[4979]: I0130 22:05:20.877438 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"} Jan 30 22:05:20 crc kubenswrapper[4979]: I0130 22:05:20.877768 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece"} Jan 30 22:05:21 crc kubenswrapper[4979]: I0130 22:05:21.081590 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" path="/var/lib/kubelet/pods/735d6952-ef80-442e-b87b-a32834aa4acb/volumes" Jan 30 22:05:21 crc kubenswrapper[4979]: I0130 22:05:21.890727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"} Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.034519 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.037100 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.041220 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.295066 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.295826 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.296186 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.296210 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.300661 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.304279 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.611940 4979 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.617799 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.656759 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762716 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762871 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762901 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762960 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.864847 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865364 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865424 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.866951 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.867293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.866810 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865874 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.866637 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.867592 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.867962 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.908426 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"} Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.908721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.924168 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.996153 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.622887 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.921364 4979 generic.go:334] "Generic (PLEG): container finished" podID="4bae0355-ad11-48d3-a13f-378354677f77" containerID="cb53a0bf80799a9038c0ec96174830f51ef5adf97bb87b1dc554e2dbe52de608" exitCode=0 Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.921437 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerDied","Data":"cb53a0bf80799a9038c0ec96174830f51ef5adf97bb87b1dc554e2dbe52de608"} Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.921765 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerStarted","Data":"fcd7f766ab345ea2e8c0ac6bd8fb4c89c2192ee2d80ef64d952c822915831fd5"} Jan 30 22:05:24 crc kubenswrapper[4979]: I0130 22:05:24.985610 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"} Jan 30 22:05:24 crc kubenswrapper[4979]: I0130 22:05:24.986296 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:05:24 crc kubenswrapper[4979]: I0130 22:05:24.994323 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerStarted","Data":"68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312"} Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.020646 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.623725495 podStartE2EDuration="6.020614815s" podCreationTimestamp="2026-01-30 22:05:19 +0000 UTC" firstStartedPulling="2026-01-30 22:05:20.086694156 +0000 UTC m=+1516.047941189" lastFinishedPulling="2026-01-30 22:05:24.483583446 +0000 UTC m=+1520.444830509" observedRunningTime="2026-01-30 22:05:25.010937446 +0000 UTC m=+1520.972184479" watchObservedRunningTime="2026-01-30 22:05:25.020614815 +0000 UTC m=+1520.981861848" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.036880 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" podStartSLOduration=3.03686073 podStartE2EDuration="3.03686073s" podCreationTimestamp="2026-01-30 22:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:25.030921521 +0000 UTC m=+1520.992168554" watchObservedRunningTime="2026-01-30 22:05:25.03686073 +0000 UTC m=+1520.998107763" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.603984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.606626 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.630937 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.772515 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.772671 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.772710 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.842826 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.843124 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" containerID="cri-o://57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380" gracePeriod=30 Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.844342 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" containerID="cri-o://356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835" gracePeriod=30 Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.874883 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.875083 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.875125 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.875943 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.883403 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.919121 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.935046 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:26 crc kubenswrapper[4979]: I0130 22:05:26.024740 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:26 crc kubenswrapper[4979]: I0130 22:05:26.222715 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 22:05:26 crc kubenswrapper[4979]: I0130 22:05:26.625384 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:26 crc kubenswrapper[4979]: W0130 22:05:26.649389 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f682a99_2265_4234_a19c_01f62262e96b.slice/crio-883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb WatchSource:0}: Error finding container 883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb: Status 404 returned error can't find the container with id 883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.036600 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f682a99-2265-4234-a19c-01f62262e96b" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d" exitCode=0 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.036666 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.037159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerStarted","Data":"883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.043690 4979 generic.go:334] "Generic (PLEG): container finished" podID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerID="4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a" exitCode=137 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.043833 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerDied","Data":"4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.044146 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerDied","Data":"418ba1031b4d4e3f1080f6d157787ad41890fff919486dd4b096f4ae99738787"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.044164 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418ba1031b4d4e3f1080f6d157787ad41890fff919486dd4b096f4ae99738787" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.047980 4979 generic.go:334] "Generic (PLEG): container finished" podID="d48f1e79-1816-4321-ba02-25d28d095a47" containerID="57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380" exitCode=143 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.048081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerDied","Data":"57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.091325 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.223323 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.223409 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.223749 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.268182 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt" (OuterVolumeSpecName: "kube-api-access-m2grt") pod "50eca4bc-cd69-4cce-a995-ac34fbcd5edd" (UID: "50eca4bc-cd69-4cce-a995-ac34fbcd5edd"). InnerVolumeSpecName "kube-api-access-m2grt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.272121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data" (OuterVolumeSpecName: "config-data") pod "50eca4bc-cd69-4cce-a995-ac34fbcd5edd" (UID: "50eca4bc-cd69-4cce-a995-ac34fbcd5edd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.298721 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50eca4bc-cd69-4cce-a995-ac34fbcd5edd" (UID: "50eca4bc-cd69-4cce-a995-ac34fbcd5edd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315105 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315442 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent" containerID="cri-o://1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315578 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd" containerID="cri-o://9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315647 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent" containerID="cri-o://1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.316927 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core" containerID="cri-o://010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.327306 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.327356 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.327372 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.076504 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" exitCode=0 Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077064 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" exitCode=2 Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077075 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" exitCode=0 Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077159 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"} Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077762 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"} Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077772 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"} Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.133195 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.150018 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.171862 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: E0130 22:05:28.204705 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.204763 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.204999 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.205770 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.205878 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.210049 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.212416 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.212463 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351410 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351499 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351921 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454678 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454925 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.455101 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.463141 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.464367 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.465789 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.476052 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.484671 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.531516 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:29 crc kubenswrapper[4979]: I0130 22:05:29.084594 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" path="/var/lib/kubelet/pods/50eca4bc-cd69-4cce-a995-ac34fbcd5edd/volumes" Jan 30 22:05:29 crc kubenswrapper[4979]: I0130 22:05:29.086416 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:29 crc kubenswrapper[4979]: W0130 22:05:29.087331 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95748319_965e_49d8_8a00_c0bc1025337d.slice/crio-e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0 WatchSource:0}: Error finding container e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0: Status 404 returned error can't find the container with id e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0 Jan 30 22:05:29 crc kubenswrapper[4979]: I0130 22:05:29.095971 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerStarted","Data":"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"} Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.175401 4979 generic.go:334] "Generic (PLEG): container finished" podID="d48f1e79-1816-4321-ba02-25d28d095a47" containerID="356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835" exitCode=0 Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.176412 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerDied","Data":"356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835"} Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.183348 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerStarted","Data":"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b"} Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.183390 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerStarted","Data":"e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0"} Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.215544 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.215508501 podStartE2EDuration="2.215508501s" podCreationTimestamp="2026-01-30 22:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:30.208420821 +0000 UTC m=+1526.169667864" watchObservedRunningTime="2026-01-30 22:05:30.215508501 +0000 UTC m=+1526.176755534" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.454481 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504789 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504923 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504987 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.512839 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs" (OuterVolumeSpecName: "logs") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.517632 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr" (OuterVolumeSpecName: "kube-api-access-vxzgr") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "kube-api-access-vxzgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.552462 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data" (OuterVolumeSpecName: "config-data") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.562103 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607063 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607116 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607130 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607140 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.170633 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.194484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerDied","Data":"28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda"} Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.194555 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.194592 4979 scope.go:117] "RemoveContainer" containerID="356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.197019 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f682a99-2265-4234-a19c-01f62262e96b" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df" exitCode=0 Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.197089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"} Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.214890 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" exitCode=0 Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.216254 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.216389 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"}
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.216422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece"}
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224531 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224660 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224795 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224904 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224978 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.225175 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
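
The eight "UnmountVolume started" entries above are the reconciler (reconciler_common.go:159) kicking off teardown of every volume still attributed to the deleted ceilometer-0 pod; the matching "UnmountVolume.TearDown succeeded" and "Volume detached" entries follow in the log below. An illustrative Go sketch for auditing that three-phase sequence in a log like this one; it is not part of this log or of the kubelet, it assumes the log is fed on stdin with one entry per line, and its regexes are inferred from the message text visible here rather than taken from kubelet source:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Phases of the unmount sequence, keyed by the volume UniqueName.
        // In the raw log, quotes inside messages appear escaped as \".
        started := regexp.MustCompile(`UnmountVolume started for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"`)
        tornDown := regexp.MustCompile(`TearDown succeeded for volume "([^"]+)"`)
        detached := regexp.MustCompile(`Volume detached for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"`)

        phase := map[string]string{} // UniqueName -> last phase reached
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // raw kubelet entries can be long
        for sc.Scan() {
            line := sc.Text()
            if m := started.FindStringSubmatch(line); m != nil {
                phase[m[1]] = "unmount started"
            }
            if m := tornDown.FindStringSubmatch(line); m != nil {
                phase[m[1]] = "torn down"
            }
            if m := detached.FindStringSubmatch(line); m != nil {
                phase[m[1]] = "detached"
            }
        }
        for vol, p := range phase {
            fmt.Printf("%-15s %s\n", p, vol)
        }
    }

Keying on the UniqueName (e.g. kubernetes.io/secret/d5452d41-...-config-data) rather than the short volume name avoids collisions when several pods each have a volume called config-data, as the nova-api-0 and ceilometer-0 teardowns here both do.

Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.226491 4979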
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.227459 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.237286 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts" (OuterVolumeSpecName: "scripts") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.244168 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx" (OuterVolumeSpecName: "kube-api-access-r7zvx") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "kube-api-access-r7zvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.248336 4979 scope.go:117] "RemoveContainer" containerID="57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.314718 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.318314 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.324732 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333118 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333174 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333192 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333205 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333218 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333228 4979 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.348146 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.370274 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371103 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371134 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371158 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371168 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371186 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371195 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371210 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" Jan 30 22:05:31 crc 
kubenswrapper[4979]: I0130 22:05:31.371221 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371245 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371255 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.373530 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.373575 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.373967 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.373999 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374014 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374059 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374079 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374095 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.379726 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.383331 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.383675 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.384728 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.397895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.424988 4979 scope.go:117] "RemoveContainer" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.431122 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.437848 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.437973 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438149 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438246 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438339 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438366 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438474 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.456009 4979 scope.go:117] "RemoveContainer" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.457393 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data" (OuterVolumeSpecName: "config-data") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.501337 4979 scope.go:117] "RemoveContainer" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.526780 4979 scope.go:117] "RemoveContainer" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.544800 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.544961 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545237 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545259 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545497 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545835 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.553295 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.554854 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.555142 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.557269 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.558264 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.562281 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.567267 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.588157 4979 scope.go:117] "RemoveContainer" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.588856 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7\": container with ID starting with 9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7 not found: ID does not exist" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.588892 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"} err="failed to get container status \"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7\": rpc error: code = NotFound desc = could not find container \"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7\": container with ID starting with 9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7 not found: ID does not exist" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.588917 4979 scope.go:117] "RemoveContainer" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.589197 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0\": container with ID starting with 010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0 not found: ID does not exist" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 
22:05:31.589225 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"} err="failed to get container status \"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0\": rpc error: code = NotFound desc = could not find container \"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0\": container with ID starting with 010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0 not found: ID does not exist" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.589239 4979 scope.go:117] "RemoveContainer" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.591681 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef\": container with ID starting with 1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef not found: ID does not exist" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.591716 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"} err="failed to get container status \"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef\": rpc error: code = NotFound desc = could not find container \"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef\": container with ID starting with 1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef not found: ID does not exist" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.591733 4979 scope.go:117] "RemoveContainer" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.592623 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e\": container with ID starting with 1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e not found: ID does not exist" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.592655 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"} err="failed to get container status \"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e\": rpc error: code = NotFound desc = could not find container \"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e\": container with ID starting with 1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e not found: ID does not exist" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.603209 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.606632 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.612700 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.618568 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.618884 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.622619 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651393 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651480 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652188 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652234 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652259 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652414 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.746490 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754657 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754711 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754738 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754813 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754863 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754901 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.755021 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.756673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc 
kubenswrapper[4979]: I0130 22:05:31.756740 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.763259 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.763888 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.764133 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.766157 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.769723 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.785392 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0" Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.946283 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.040393 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.040766 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.234524 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerStarted","Data":"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"} Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.287895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.314657 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bsf45" podStartSLOduration=2.614654167 podStartE2EDuration="7.314612353s" podCreationTimestamp="2026-01-30 22:05:25 +0000 UTC" firstStartedPulling="2026-01-30 22:05:27.043057571 +0000 UTC m=+1523.004304604" lastFinishedPulling="2026-01-30 22:05:31.743015737 +0000 UTC m=+1527.704262790" observedRunningTime="2026-01-30 22:05:32.254406508 +0000 UTC m=+1528.215653541" watchObservedRunningTime="2026-01-30 22:05:32.314612353 +0000 UTC m=+1528.275859386" Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.474789 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:32 crc kubenswrapper[4979]: W0130 22:05:32.475208 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b34adef_df84_42dd_a052_5e543c4182b5.slice/crio-1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e WatchSource:0}: Error finding container 1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e: Status 404 returned error can't find the container with id 1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.998262 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.080025 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"] Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.080400 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns" containerID="cri-o://b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457" gracePeriod=10 Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.125885 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" path="/var/lib/kubelet/pods/d48f1e79-1816-4321-ba02-25d28d095a47/volumes" Jan 30 
22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.127174 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" path="/var/lib/kubelet/pods/d5452d41-b901-4e6c-876c-06c0f44ba8ef/volumes" Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.304055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerStarted","Data":"75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8"} Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.304570 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerStarted","Data":"92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310"} Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.304586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerStarted","Data":"94fccc846accac2626b4330c74f1995d347342c1b98a558385ef9d93cbd0d6e8"} Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.314430 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267"} Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.314496 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e"} Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.320607 4979 generic.go:334] "Generic (PLEG): container finished" podID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerID="b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457" exitCode=0 Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.320670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerDied","Data":"b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457"} Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.332741 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.33271733 podStartE2EDuration="2.33271733s" podCreationTimestamp="2026-01-30 22:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:33.331979001 +0000 UTC m=+1529.293226034" watchObservedRunningTime="2026-01-30 22:05:33.33271733 +0000 UTC m=+1529.293964363" Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.531860 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.826089 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.916772 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.916853 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.916894 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.917101 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.917143 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.917185 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.945900 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2" (OuterVolumeSpecName: "kube-api-access-xswj2") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "kube-api-access-xswj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.021617 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.043305 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.046604 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.056741 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.059871 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config" (OuterVolumeSpecName: "config") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.063673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123630 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123823 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123844 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123858 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123874 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.333625 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.333618 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerDied","Data":"dad5ecae947304a11e938cd18a6af2bcf48628237b04604b4febaa6b29c4e97a"} Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.334178 4979 scope.go:117] "RemoveContainer" containerID="b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.338869 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c"} Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.358347 4979 scope.go:117] "RemoveContainer" containerID="013d174f6848cb2abad2b004411d67e5b0bf2bc2e07bdd6263bb0777501bbd65" Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.376310 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"] Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.403722 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"] Jan 30 22:05:35 crc kubenswrapper[4979]: I0130 22:05:35.083457 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" path="/var/lib/kubelet/pods/c5be09bc-3cf9-443f-bfc7-904e8ed874f8/volumes" Jan 30 22:05:35 crc kubenswrapper[4979]: I0130 22:05:35.936731 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:35 crc kubenswrapper[4979]: I0130 22:05:35.937250 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:36 crc kubenswrapper[4979]: I0130 22:05:36.377997 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511"} Jan 30 22:05:36 crc kubenswrapper[4979]: I0130 22:05:36.994505 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bsf45" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" probeResult="failure" output=< Jan 30 22:05:36 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 22:05:36 crc kubenswrapper[4979]: > Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.400809 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed"} Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.401883 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.433751 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.920565781 podStartE2EDuration="7.43371792s" podCreationTimestamp="2026-01-30 22:05:31 +0000 UTC" firstStartedPulling="2026-01-30 22:05:32.47896983 +0000 UTC m=+1528.440216863" 
lastFinishedPulling="2026-01-30 22:05:37.992121969 +0000 UTC m=+1533.953369002" observedRunningTime="2026-01-30 22:05:38.422263162 +0000 UTC m=+1534.383510195" watchObservedRunningTime="2026-01-30 22:05:38.43371792 +0000 UTC m=+1534.394964953" Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.531865 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.555878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.445990 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.652369 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"] Jan 30 22:05:39 crc kubenswrapper[4979]: E0130 22:05:39.653102 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="init" Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.653124 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="init" Jan 30 22:05:39 crc kubenswrapper[4979]: E0130 22:05:39.653156 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns" Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.653165 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns" Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.653452 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns" Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.660649 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.665709 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"]
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.668260 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.669600 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758560 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758695 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758816 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758928 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861282 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861495 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.874924 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.874998 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.875690 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.885827 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.995218 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:40 crc kubenswrapper[4979]: I0130 22:05:40.535670 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"]
Jan 30 22:05:40 crc kubenswrapper[4979]: W0130 22:05:40.541577 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707c6502_cbf2_4d94_b032_6d6eeebb581e.slice/crio-f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24 WatchSource:0}: Error finding container f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24: Status 404 returned error can't find the container with id f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24
Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.441244 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerStarted","Data":"273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839"}
Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.441707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerStarted","Data":"f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24"}
Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.469184 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2zdqm" podStartSLOduration=2.469151196 podStartE2EDuration="2.469151196s" podCreationTimestamp="2026-01-30 22:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:41.468145629 +0000 UTC m=+1537.429392682" watchObservedRunningTime="2026-01-30 22:05:41.469151196 +0000 UTC m=+1537.430398229"
Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.750368 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.750418 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 22:05:42 crc kubenswrapper[4979]: I0130 22:05:42.780357 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 22:05:42 crc kubenswrapper[4979]: I0130 22:05:42.780366 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.014202 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bsf45"
Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.096845 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bsf45"
Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.259599 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"]
Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.497300 4979 generic.go:334] "Generic (PLEG): container finished" podID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerID="273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839" exitCode=0
Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.497585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerDied","Data":"273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839"}
Jan 30 22:05:47 crc kubenswrapper[4979]: I0130 22:05:47.521590 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bsf45" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" containerID="cri-o://ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" gracePeriod=2
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.022892 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.123733 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.123799 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.123925 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.124269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.131555 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq" (OuterVolumeSpecName: "kube-api-access-tdlvq") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "kube-api-access-tdlvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.134728 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts" (OuterVolumeSpecName: "scripts") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.154262 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.157291 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data" (OuterVolumeSpecName: "config-data") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.161739 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.227042 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"9f682a99-2265-4234-a19c-01f62262e96b\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.227181 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"9f682a99-2265-4234-a19c-01f62262e96b\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.227352 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"9f682a99-2265-4234-a19c-01f62262e96b\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") "
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228341 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities" (OuterVolumeSpecName: "utilities") pod "9f682a99-2265-4234-a19c-01f62262e96b" (UID: "9f682a99-2265-4234-a19c-01f62262e96b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228906 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228932 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228943 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228956 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228965 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.231171 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj" (OuterVolumeSpecName: "kube-api-access-8s4vj") pod "9f682a99-2265-4234-a19c-01f62262e96b" (UID: "9f682a99-2265-4234-a19c-01f62262e96b"). InnerVolumeSpecName "kube-api-access-8s4vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.332195 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.360725 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f682a99-2265-4234-a19c-01f62262e96b" (UID: "9f682a99-2265-4234-a19c-01f62262e96b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.436218 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543530 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f682a99-2265-4234-a19c-01f62262e96b" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" exitCode=0
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543628 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"}
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb"}
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543685 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543698 4979 scope.go:117] "RemoveContainer" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.556274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerDied","Data":"f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24"}
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.556328 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.556432 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.611292 4979 scope.go:117] "RemoveContainer" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.628111 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"]
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.642341 4979 scope.go:117] "RemoveContainer" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.649528 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"]
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.718425 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.718811 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" containerID="cri-o://75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8" gracePeriod=30
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.719008 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" containerID="cri-o://92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310" gracePeriod=30
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.732621 4979 scope.go:117] "RemoveContainer" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"
Jan 30 22:05:48 crc kubenswrapper[4979]: E0130 22:05:48.733213 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8\": container with ID starting with ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8 not found: ID does not exist" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733257 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"} err="failed to get container status \"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8\": rpc error: code = NotFound desc = could not find container \"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8\": container with ID starting with ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8 not found: ID does not exist"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733286 4979 scope.go:117] "RemoveContainer" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"
Jan 30 22:05:48 crc kubenswrapper[4979]: E0130 22:05:48.733506 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df\": container with ID starting with 3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df not found: ID does not exist" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733532 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"} err="failed to get container status \"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df\": rpc error: code = NotFound desc = could not find container \"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df\": container with ID starting with 3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df not found: ID does not exist"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733549 4979 scope.go:117] "RemoveContainer" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"
Jan 30 22:05:48 crc kubenswrapper[4979]: E0130 22:05:48.733776 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d\": container with ID starting with f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d not found: ID does not exist" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733796 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"} err="failed to get container status \"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d\": rpc error: code = NotFound desc = could not find container \"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d\": container with ID starting with f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d not found: ID does not exist"
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.762221 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.762536 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler" containerID="cri-o://fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" gracePeriod=30
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.779185 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.779495 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" containerID="cri-o://bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" gracePeriod=30
Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.779670 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" containerID="cri-o://5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" gracePeriod=30
Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.081078 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f682a99-2265-4234-a19c-01f62262e96b" path="/var/lib/kubelet/pods/9f682a99-2265-4234-a19c-01f62262e96b/volumes"
Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.571277 4979 generic.go:334] "Generic (PLEG): container finished" podID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" exitCode=143
Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.571396 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerDied","Data":"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"}
Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.575882 4979 generic.go:334] "Generic (PLEG): container finished" podID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerID="92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310" exitCode=143
Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.575936 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerDied","Data":"92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310"}
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.027709 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": dial tcp 10.217.0.195:8775: connect: connection refused"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.027785 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": dial tcp 10.217.0.195:8775: connect: connection refused"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.485482 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534480 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534559 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534811 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534867 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") "
\"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.535783 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs" (OuterVolumeSpecName: "logs") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.537671 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.555121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb" (OuterVolumeSpecName: "kube-api-access-zc9pb") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "kube-api-access-zc9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.576713 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.612465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.613590 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data" (OuterVolumeSpecName: "config-data") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.622331 4979 generic.go:334] "Generic (PLEG): container finished" podID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerID="75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8" exitCode=0 Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.622414 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerDied","Data":"75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624092 4979 generic.go:334] "Generic (PLEG): container finished" podID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" exitCode=0 Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624172 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624200 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerDied","Data":"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624283 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerDied","Data":"cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624320 4979 scope.go:117] "RemoveContainer" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625613 4979 generic.go:334] "Generic (PLEG): container finished" podID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" exitCode=0 Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625654 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerDied","Data":"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625682 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerDied","Data":"7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625716 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.632104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.641509 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"4df90142-0487-4f26-8fb8-4ea21cda53d5\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.641794 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"4df90142-0487-4f26-8fb8-4ea21cda53d5\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.641848 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"4df90142-0487-4f26-8fb8-4ea21cda53d5\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") "
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642648 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642673 4979 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642691 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642706 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.661494 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2" (OuterVolumeSpecName: "kube-api-access-6mcf2") pod "4df90142-0487-4f26-8fb8-4ea21cda53d5" (UID: "4df90142-0487-4f26-8fb8-4ea21cda53d5"). InnerVolumeSpecName "kube-api-access-6mcf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.680305 4979 scope.go:117] "RemoveContainer" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.688993 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data" (OuterVolumeSpecName: "config-data") pod "4df90142-0487-4f26-8fb8-4ea21cda53d5" (UID: "4df90142-0487-4f26-8fb8-4ea21cda53d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.691224 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4df90142-0487-4f26-8fb8-4ea21cda53d5" (UID: "4df90142-0487-4f26-8fb8-4ea21cda53d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.716268 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.728407 4979 scope.go:117] "RemoveContainer" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"
Jan 30 22:05:52 crc kubenswrapper[4979]: E0130 22:05:52.729246 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c\": container with ID starting with 5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c not found: ID does not exist" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.729487 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"} err="failed to get container status \"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c\": rpc error: code = NotFound desc = could not find container \"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c\": container with ID starting with 5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c not found: ID does not exist"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.729620 4979 scope.go:117] "RemoveContainer" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"
Jan 30 22:05:52 crc kubenswrapper[4979]: E0130 22:05:52.730532 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e\": container with ID starting with bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e not found: ID does not exist" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.730690 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"} err="failed to get container status \"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e\": rpc error: code = NotFound desc = could not find container \"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e\": container with ID starting with bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e not found: ID does not exist"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.730799 4979 scope.go:117] "RemoveContainer" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.751551 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.751586 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.751598 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.765820 4979 scope.go:117] "RemoveContainer" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" Jan 30 22:05:52 crc kubenswrapper[4979]: E0130 22:05:52.767075 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed\": container with ID starting with fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed not found: ID does not exist" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.767141 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"} err="failed to get container status \"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed\": rpc error: code = NotFound desc = could not find container \"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed\": container with ID starting with fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed not found: ID does not exist" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852490 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852631 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852686 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852755 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852951 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.853095 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.853560 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs" (OuterVolumeSpecName: "logs") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.853677 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.857061 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v" (OuterVolumeSpecName: "kube-api-access-7vt8v") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "kube-api-access-7vt8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.885868 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data" (OuterVolumeSpecName: "config-data") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.899576 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.917751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.917893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "public-tls-certs". 
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955548 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955578 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955592 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955602 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955611 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.976138 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.006162 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021024 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021746 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021775 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021797 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021809 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021825 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021833 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021846 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021854 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021880 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-content"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021888 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-content"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021900 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021910 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021947 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerName="nova-manage"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021955 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerName="nova-manage"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021968 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021976 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler"
Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021992 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-utilities"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-utilities"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022286 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022307 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022318 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022335 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022355 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerName="nova-manage"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022368 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022403 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.023467 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.033582 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.034824 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.060409 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.084097 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" path="/var/lib/kubelet/pods/4df90142-0487-4f26-8fb8-4ea21cda53d5/volumes"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.084826 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" path="/var/lib/kubelet/pods/b45ea9a1-6c1f-4719-8432-2add7fdef96d/volumes"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.085890 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.085929 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.088253 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.093191 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.093443 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.095153 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.170920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.171611 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.171892 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174222 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174401 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174476 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174728 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174804 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.277458 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.277531 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.278108 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279395 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279737 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279803 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.280605 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.285639 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.285836 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.286892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.286903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.287957 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.297325 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0"
Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.300416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.464567 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.486994 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.643136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerDied","Data":"94fccc846accac2626b4330c74f1995d347342c1b98a558385ef9d93cbd0d6e8"} Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.643202 4979 scope.go:117] "RemoveContainer" containerID="75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.643222 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.674924 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.703206 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.712081 4979 scope.go:117] "RemoveContainer" containerID="92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.715988 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.717854 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.720718 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.720956 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.721051 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.735065 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.796964 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797423 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797626 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797809 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.798142 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900160 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900241 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900288 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.901284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.901331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.901837 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.905623 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.906754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.906992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.907321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.920307 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 
30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.998727 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:54 crc kubenswrapper[4979]: W0130 22:05:54.001481 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69eed38_4641_4703_8a87_93aedebfbff1.slice/crio-44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8 WatchSource:0}: Error finding container 44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8: Status 404 returned error can't find the container with id 44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8 Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.045644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:54 crc kubenswrapper[4979]: W0130 22:05:54.102433 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44df4390_d39d_42b7_904c_99d3e9680768.slice/crio-310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff WatchSource:0}: Error finding container 310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff: Status 404 returned error can't find the container with id 310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.104430 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:54 crc kubenswrapper[4979]: W0130 22:05:54.558913 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae89cf4_f9f4_456b_947f_be87514b79ff.slice/crio-d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900 WatchSource:0}: Error finding container d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900: Status 404 returned error can't find the container with id d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900 Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.562681 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.671478 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerStarted","Data":"99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.671545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerStarted","Data":"2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.671557 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerStarted","Data":"310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.674454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerStarted","Data":"e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.674510 4979 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerStarted","Data":"44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.676005 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerStarted","Data":"d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.707457 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.707429953 podStartE2EDuration="2.707429953s" podCreationTimestamp="2026-01-30 22:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:54.691715322 +0000 UTC m=+1550.652962375" watchObservedRunningTime="2026-01-30 22:05:54.707429953 +0000 UTC m=+1550.668676996" Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.725803 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.725778186 podStartE2EDuration="2.725778186s" podCreationTimestamp="2026-01-30 22:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:54.71589612 +0000 UTC m=+1550.677143153" watchObservedRunningTime="2026-01-30 22:05:54.725778186 +0000 UTC m=+1550.687025219" Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.088955 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" path="/var/lib/kubelet/pods/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b/volumes" Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.724294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerStarted","Data":"5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504"} Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.724397 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerStarted","Data":"748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96"} Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.757319 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.757289552 podStartE2EDuration="2.757289552s" podCreationTimestamp="2026-01-30 22:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:55.752430502 +0000 UTC m=+1551.713677555" watchObservedRunningTime="2026-01-30 22:05:55.757289552 +0000 UTC m=+1551.718536585" Jan 30 22:05:58 crc kubenswrapper[4979]: I0130 22:05:58.464912 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 22:05:58 crc kubenswrapper[4979]: I0130 22:05:58.487994 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:05:58 crc kubenswrapper[4979]: I0130 22:05:58.488122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:06:01 crc kubenswrapper[4979]: I0130 
22:06:01.960411 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.039458 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.039613 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.039710 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.040803 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.040872 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" gracePeriod=600 Jan 30 22:06:02 crc kubenswrapper[4979]: E0130 22:06:02.168489 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.815952 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" exitCode=0 Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.816014 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c"} Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.816443 4979 scope.go:117] "RemoveContainer" containerID="9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.817416 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:02 crc kubenswrapper[4979]: E0130 22:06:02.817815 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.465155 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.488259 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.488326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.503257 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.861820 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.046389 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.046459 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.511334 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.511353 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:05 crc kubenswrapper[4979]: I0130 22:06:05.065276 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:05 crc kubenswrapper[4979]: I0130 22:06:05.065408 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.070291 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:13 crc kubenswrapper[4979]: E0130 22:06:13.071602 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.494360 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.494462 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.500944 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.502338 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.058229 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.059016 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.059938 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.069448 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.941445 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.953130 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:06:26 crc kubenswrapper[4979]: I0130 22:06:26.070717 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:26 crc kubenswrapper[4979]: E0130 22:06:26.072203 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.395796 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.398501 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.403143 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.467417 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.467501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.518588 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.545454 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.545824 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" containerID="cri-o://6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c" gracePeriod=2 Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.557950 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.570105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.570194 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.571902 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.578109 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.590324 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.660228 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.741182 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.777090 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:31 crc kubenswrapper[4979]: E0130 22:06:31.777695 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.777713 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.777973 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.793236 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.806743 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.843249 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.884172 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.885776 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.889942 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.923134 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.940754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.946206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:31 crc kubenswrapper[4979]: E0130 22:06:31.947987 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:31 crc kubenswrapper[4979]: E0130 22:06:31.948166 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:32.448140926 +0000 UTC m=+1588.409387959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.981102 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049872 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049963 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.051076 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.099218 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.101134 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.108167 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.133373 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.139215 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.152329 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.154192 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157568 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.157912 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.157987 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:32.657963162 +0000 UTC m=+1588.619210195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.159096 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.159932 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.159972 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:32.659962235 +0000 UTC m=+1588.621209268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.163943 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.164009 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.172094 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.176086 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.186138 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.194764 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.224430 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.263971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.286884 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.287157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.287243 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.288825 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.362233 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.405244 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnjv\" (UniqueName: 
\"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.426966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.427143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.429326 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.508319 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.526260 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.580189 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.580284 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.580256166 +0000 UTC m=+1589.541503189 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.672654 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.717863 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.718469 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.718446293 +0000 UTC m=+1589.679693326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.718913 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.718940 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.718932797 +0000 UTC m=+1589.680179830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.720019 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.812416 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.863581 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.924968 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.991325 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.993067 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.012127 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.037986 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.038070 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.538048124 +0000 UTC m=+1589.499295157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.039711 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.141931 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.142565 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.222168 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" path="/var/lib/kubelet/pods/206c6cff-9f21-42be-b4d9-ebab3cb4ead8/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.251280 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" path="/var/lib/kubelet/pods/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.252253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.252353 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.255925 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.256458 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.256545 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.756521013 +0000 UTC m=+1589.717768046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.280389 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" path="/var/lib/kubelet/pods/bc3a0116-2f4a-4dde-bf99-56759f4349bc/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.284125 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" path="/var/lib/kubelet/pods/e0187b79-63c8-4f13-af19-892e8c9b36f9/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.286293 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.286331 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.311564 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.311624 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.311834 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.314371 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter" containerID="cri-o://9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844" gracePeriod=300 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.322989 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.363722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.364284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367651 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367696 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367710 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367831 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.375902 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.377355 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter" containerID="cri-o://ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402" gracePeriod=300 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.396213 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.423107 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.451176 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.453132 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.468880 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.469010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.469054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.469197 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.470076 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod 
\"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.539417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.587442 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.609631 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.610119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.611886 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613084 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613175 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.613150889 +0000 UTC m=+1590.574397922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613456 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613480 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.613472288 +0000 UTC m=+1591.574719321 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.624923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.646505 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.646601 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.146580138 +0000 UTC m=+1590.107827171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.685325 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.716362 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.716444 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.216427128 +0000 UTC m=+1590.177674151 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.762322 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.763086 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" containerID="cri-o://80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1" gracePeriod=30 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.763213 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" containerID="cri-o://e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" gracePeriod=30 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.791159 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.813438 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.820508 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.820604 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.82058296 +0000 UTC m=+1591.781829993 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.821087 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.821116 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.821106155 +0000 UTC m=+1591.782353188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.824963 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.825208 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:34.825180455 +0000 UTC m=+1590.786427488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.844267 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.860387 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.936098 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.964569 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb" containerID="cri-o://364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e" gracePeriod=300 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.073299 4979 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/neutron-ccc5789d5-9fbcz" secret="" err="secret \"neutron-neutron-dockercfg-cgj89\" not found" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.150654 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.157945 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.168143 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.168573 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.168548764 +0000 UTC m=+1591.129795797 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.194962 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb" containerID="cri-o://e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d" gracePeriod=300 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.260995 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.262885 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.262962 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.762943464 +0000 UTC m=+1590.724190497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265367 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265453 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.265431291 +0000 UTC m=+1591.226678324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265513 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265541 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.765530863 +0000 UTC m=+1590.726777896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.275706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerStarted","Data":"b83c4ed8bbda19ed5aa54ca0fc84bb29d05f7a78681b54738255e43bd19127ba"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.281438 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282169 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" containerID="cri-o://9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282673 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" containerID="cri-o://453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282734 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" containerID="cri-o://91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282773 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" containerID="cri-o://7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282816 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" containerID="cri-o://a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282884 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" containerID="cri-o://20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282965 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" containerID="cri-o://1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282915 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" containerID="cri-o://b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3" gracePeriod=30 Jan 30 22:06:34 crc 
kubenswrapper[4979]: I0130 22:06:34.283068 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" containerID="cri-o://42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283124 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" containerID="cri-o://1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283386 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" containerID="cri-o://34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283457 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" containerID="cri-o://c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283513 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" containerID="cri-o://7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283592 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" containerID="cri-o://fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283669 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" containerID="cri-o://77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.298552 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e16537b0-b66e-4bad-a481-9d2755cf6eb5/ovsdbserver-sb/0.log" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.298634 4979 generic.go:334] "Generic (PLEG): container finished" podID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerID="ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402" exitCode=2 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.298662 4979 generic.go:334] "Generic (PLEG): container finished" podID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerID="364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e" exitCode=143 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.299897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerDied","Data":"ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.299984 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerDied","Data":"364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.340644 4979 generic.go:334] "Generic (PLEG): container finished" podID="82508003-60c8-463b-92a9-bc9521fcfa03" containerID="6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c" exitCode=137 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.341085 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.371134 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.381814 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.382722 4979 generic.go:334] "Generic (PLEG): container finished" podID="e8a49e0c-0043-4326-b478-981d19e6480b" containerID="9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844" exitCode=2 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.382784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerDied","Data":"9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.402482 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.425239 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.454518 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:34 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: if [ -n "cinder" ]; then Jan 30 22:06:34 crc kubenswrapper[4979]: GRANT_DATABASE="cinder" Jan 30 22:06:34 crc kubenswrapper[4979]: else Jan 30 22:06:34 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:34 crc kubenswrapper[4979]: fi Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:34 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:34 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:34 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:34 crc kubenswrapper[4979]: # support updates Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.456797 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-18a2-account-create-update-tgfqm" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.460250 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.523633 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.592952 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.608261 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.631994 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.669484 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.681270 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.681607 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-lz8zj" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter" containerID="cri-o://bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.703864 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.703951 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:36.70392893 +0000 UTC m=+1592.665175953 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.704897 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.705177 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns" containerID="cri-o://68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312" gracePeriod=10 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.728606 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.740248 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.785928 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.805941 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.806022 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.806004557 +0000 UTC m=+1591.767251590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.812195 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.823339 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.823291012 +0000 UTC m=+1591.784538035 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.897506 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode16537b0_b66e_4bad_a481_9d2755cf6eb5.slice/crio-conmon-364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82508003_60c8_463b_92a9_bc9521fcfa03.slice/crio-conmon-6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a49e0c_0043_4326_b478_981d19e6480b.slice/crio-conmon-e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7cc7cf6_3592_4e25_9578_27ae56d6909b.slice/crio-conmon-80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.906126 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.908571 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.908643 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:36.908624309 +0000 UTC m=+1592.869871342 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:34.995201 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.066117 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.238085 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" path="/var/lib/kubelet/pods/023efd8e-7f0d-4ac5-80b3-db30dbb25905/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.248441 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.274448 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.274624 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.275239 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.275204172 +0000 UTC m=+1593.236451205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.249005 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e523da-837e-4af0-835b-55b1950fc487" path="/var/lib/kubelet/pods/15e523da-837e-4af0-835b-55b1950fc487/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.284408 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" containerID="cri-o://32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2" gracePeriod=604800 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.375879 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" path="/var/lib/kubelet/pods/29c6531f-d97f-4f39-95bd-4c2b8a75779f/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.376440 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.379458 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.379600 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.379725 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.382459 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.382605 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.382565751 +0000 UTC m=+1593.343812774 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.392620 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" path="/var/lib/kubelet/pods/707c6502-cbf2-4d94-b032-6d6eeebb581e/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.397452 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" path="/var/lib/kubelet/pods/79723cfd-4e3c-446c-bdf1-5c2c997950a8/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.399307 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp" (OuterVolumeSpecName: "kube-api-access-t85tp") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "kube-api-access-t85tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.399562 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" path="/var/lib/kubelet/pods/80aa258c-fc1b-4379-8b50-ac89cb9b4568/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.403327 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" path="/var/lib/kubelet/pods/8481722d-b63c-4f8e-82e2-0960d719b46b/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.408636 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" path="/var/lib/kubelet/pods/9c59f1f7-caf7-4ab4-b405-dbf27330ff37/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.422455 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" path="/var/lib/kubelet/pods/abec2c46-a984-4314-88c5-d50d20ef7f8d/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.426253 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" path="/var/lib/kubelet/pods/adb76b95-4c2d-478d-b9d9-e6e182859ccd/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.428610 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" path="/var/lib/kubelet/pods/bd648327-e40d-4f17-9366-1773fa95f47a/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.435206 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" path="/var/lib/kubelet/pods/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326/volumes"
Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.437300 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 22:06:35 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 22:06:35 crc kubenswrapper[4979]:
Jan 30 22:06:35 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 22:06:35 crc kubenswrapper[4979]:
Jan 30 22:06:35 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 22:06:35 crc kubenswrapper[4979]:
Jan 30 22:06:35 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 22:06:35 crc kubenswrapper[4979]:
Jan 30 22:06:35 crc kubenswrapper[4979]: if [ -n "cinder" ]; then
Jan 30 22:06:35 crc kubenswrapper[4979]: GRANT_DATABASE="cinder"
Jan 30 22:06:35 crc kubenswrapper[4979]: else
Jan 30 22:06:35 crc kubenswrapper[4979]: GRANT_DATABASE="*"
Jan 30 22:06:35 crc kubenswrapper[4979]: fi
Jan 30 22:06:35 crc kubenswrapper[4979]:
Jan 30 22:06:35 crc kubenswrapper[4979]: # going for maximum compatibility here:
Jan 30 22:06:35 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 22:06:35 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 22:06:35 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 30 22:06:35 crc kubenswrapper[4979]: # support updates
Jan 30 22:06:35 crc kubenswrapper[4979]:
Jan 30 22:06:35 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.438497 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-18a2-account-create-update-tgfqm" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.442129 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.447933 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-tgfqm" event={"ID":"d4fc1eef-47e7-4fdd-9642-da7ce95056e8","Type":"ContainerStarted","Data":"d9676cb7e0eb5ddecab92aeb166656644b6133c3cd8ff91f6626cb611a3b2256"}
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448049 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-95kjb"]
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448076 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"]
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448093 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448651 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"]
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448677 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5574d874bd-cg256"]
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448695 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"]
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449136 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5574d874bd-cg256" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" containerID="cri-o://4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3" gracePeriod=30
Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449413 4979 scope.go:117] "RemoveContainer"
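[The account-create command logged above is cut off at its heredoc ("$MYSQL_CMD < ..." is all the log kept), so the exact SQL it feeds to mysql is not recoverable here. The script's own comments do describe the intended pattern: create the account with CREATE USER IF NOT EXISTS (MySQL 8 no longer creates users implicitly on GRANT, and MySQL lacks MariaDB's CREATE OR REPLACE), then apply the password and TLS options with ALTER USER so re-runs update an existing account. A minimal sketch of that pattern, reusing the names visible in the logged fragment (GRANT_DATABASE, DatabasePassword, MYSQL_REMOTE_HOST) rather than the real heredoc body:

#!/bin/bash
# Sketch only -- the real heredoc was elided from the log above.
# MYSQL_REMOTE_HOST and DatabasePassword come from the helper script the
# logged command sources; 'cinder' stands in for the account being managed.
mysql -h "${MYSQL_REMOTE_HOST}" -u root -P 3306 <<EOF
CREATE USER IF NOT EXISTS 'cinder'@'%';
ALTER USER 'cinder'@'%' IDENTIFIED BY '${DatabasePassword}';
GRANT ALL PRIVILEGES ON ${GRANT_DATABASE}.* TO 'cinder'@'%';
EOF

With GRANT_DATABASE="*" (the else branch in the logged script) the grant expands to ON *.*, i.e. all databases; the create-then-alter split is what keeps the job idempotent across reruns.]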
containerID="6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449742 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-ccc5789d5-9fbcz" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" containerID="cri-o://94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449960 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" containerID="cri-o://2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.450057 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-ccc5789d5-9fbcz" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" containerID="cri-o://cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.450135 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5574d874bd-cg256" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" containerID="cri-o://db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.450447 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" containerID="cri-o://aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.485970 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lz8zj_817d8847-f022-4837-834f-a0e4b124f7ea/openstack-network-exporter/0.log" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.486112 4979 generic.go:334] "Generic (PLEG): container finished" podID="817d8847-f022-4837-834f-a0e4b124f7ea" containerID="bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff" exitCode=2 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.487455 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerDied","Data":"bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.490386 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.513296 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.514094 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" containerID="cri-o://3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.514815 4979 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" containerID="cri-o://998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.517813 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e16537b0-b66e-4bad-a481-9d2755cf6eb5/ovsdbserver-sb/0.log" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.517917 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.530345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.530721 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.531429 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" containerID="cri-o://70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.531679 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" containerID="cri-o://33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551316 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8a49e0c-0043-4326-b478-981d19e6480b/ovsdbserver-nb/0.log" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551464 4979 generic.go:334] "Generic (PLEG): container finished" podID="e8a49e0c-0043-4326-b478-981d19e6480b" containerID="e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d" exitCode=143 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551954 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551997 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerDied","Data":"e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.560091 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerID="8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66" exitCode=1 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.560186 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerDied","Data":"8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.565638 4979 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.568340 4979 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-nxlz6" secret="" err="secret \"galera-openstack-cell1-dockercfg-wj9ck\" not found" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.568429 4979 scope.go:117] "RemoveContainer" containerID="8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.580153 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591652 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591725 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591766 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591828 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591968 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.592066 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.592152 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.592253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc 
kubenswrapper[4979]: I0130 22:06:35.594933 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.597237 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.597521 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" containerID="cri-o://0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.597627 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" containerID="cri-o://1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.599813 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts" (OuterVolumeSpecName: "scripts") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.600170 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config" (OuterVolumeSpecName: "config") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.603349 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.616002 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.618219 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.618257 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619208 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619237 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619247 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619258 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619265 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619282 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619289 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619295 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619301 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619308 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619315 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316" exitCode=0 Jan 30 
22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619322 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619369 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619540 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619558 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619622 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619649 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2"} Jan 30 22:06:35 
crc kubenswrapper[4979]: I0130 22:06:35.619714 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619738 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.629541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6" (OuterVolumeSpecName: "kube-api-access-5lsh6") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "kube-api-access-5lsh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.634718 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.635132 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" containerID="cri-o://d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.635365 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" containerID="cri-o://9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.639085 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.639587 4979 generic.go:334] "Generic (PLEG): container finished" podID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerID="80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1" exitCode=2 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.639973 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerDied","Data":"80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.647443 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.656100 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.662443 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.664171 4979 generic.go:334] "Generic (PLEG): container finished" podID="4bae0355-ad11-48d3-a13f-378354677f77" containerID="68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.664247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerDied","Data":"68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.686143 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.686281 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.698395 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.709789 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.709897 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:39.709873419 +0000 UTC m=+1595.671120452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.709970 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710011 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710022 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710049 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710058 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710068 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710089 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710099 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.710353 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.710450 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. No retries permitted until 2026-01-30 22:06:36.210420723 +0000 UTC m=+1592.171667926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710746 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.711243 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" containerID="cri-o://3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.711846 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" containerID="cri-o://10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.737287 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.800410 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.800895 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" containerID="cri-o://edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.801135 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" containerID="cri-o://b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.819017 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.819801 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.819876 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.819851348 +0000 UTC m=+1593.781098571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.820898 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.828319 4979 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 22:06:35 crc kubenswrapper[4979]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 22:06:35 crc kubenswrapper[4979]: + source /usr/local/bin/container-scripts/functions Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNBridge=br-int Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNRemote=tcp:localhost:6642 Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNEncapType=geneve Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNAvailabilityZones= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ EnableChassisAsGateway=true Jan 30 22:06:35 crc kubenswrapper[4979]: ++ PhysicalNetworks= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNHostName= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 22:06:35 crc kubenswrapper[4979]: ++ ovs_dir=/var/lib/openvswitch Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 22:06:35 crc kubenswrapper[4979]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + cleanup_ovsdb_server_semaphore Jan 30 22:06:35 crc kubenswrapper[4979]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 22:06:35 crc kubenswrapper[4979]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-tmjt2" message=< Jan 30 22:06:35 crc kubenswrapper[4979]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 22:06:35 crc kubenswrapper[4979]: + source /usr/local/bin/container-scripts/functions Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNBridge=br-int Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNRemote=tcp:localhost:6642 Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNEncapType=geneve Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNAvailabilityZones= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ EnableChassisAsGateway=true Jan 30 22:06:35 crc kubenswrapper[4979]: ++ PhysicalNetworks= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNHostName= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 22:06:35 crc kubenswrapper[4979]: ++ ovs_dir=/var/lib/openvswitch Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 22:06:35 crc kubenswrapper[4979]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + cleanup_ovsdb_server_semaphore Jan 30 22:06:35 crc kubenswrapper[4979]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 22:06:35 crc kubenswrapper[4979]: > Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.828376 4979 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 22:06:35 crc kubenswrapper[4979]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 22:06:35 crc kubenswrapper[4979]: + source /usr/local/bin/container-scripts/functions Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNBridge=br-int Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNRemote=tcp:localhost:6642 Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNEncapType=geneve Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNAvailabilityZones= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ EnableChassisAsGateway=true Jan 30 22:06:35 crc kubenswrapper[4979]: ++ PhysicalNetworks= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNHostName= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 22:06:35 crc kubenswrapper[4979]: ++ ovs_dir=/var/lib/openvswitch Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 22:06:35 crc kubenswrapper[4979]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + cleanup_ovsdb_server_semaphore Jan 30 22:06:35 crc kubenswrapper[4979]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 22:06:35 crc kubenswrapper[4979]: > pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" containerID="cri-o://2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828430 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" containerID="cri-o://2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" gracePeriod=29 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828588 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828833 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" containerID="cri-o://2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828925 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" containerID="cri-o://99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.895765 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.896379 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" containerID="cri-o://748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.896506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
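[The PreStop failure above ("exited with 137", i.e. killed by SIGKILL) is the xtrace of stop-ovsdb-server.sh: the hook polls for a semaphore file that a peer is expected to create once it is safe to stop the database, then removes the semaphore and stops only ovsdb-server; because the hook consumed part of the termination grace period, the kubelet then kills the container with gracePeriod=29 rather than 30. A minimal reconstruction of the hook from the trace (the sourced "functions" file is not in the log, so cleanup_ovsdb_server_semaphore is inlined from what the trace shows):

#!/bin/bash
# Reconstructed from the xtrace in the log; not the original script.
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server

# Poll until a peer signals (by creating the marker file) that flows have
# been saved and ovsdb-server may stop. If the pod's grace period expires
# first, the hook is killed and exits 137, as seen above.
while [ ! -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE" ]; do
    sleep 0.5
done

# cleanup_ovsdb_server_semaphore, per the trace, removes the marker file.
rm -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE"

# Stop only the database server; ovs-vswitchd is torn down separately
# (its own kill with gracePeriod=29 appears further down in this log).
/usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd]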
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.896681 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" containerID="cri-o://5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923337 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923616 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.923414 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923492 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923841 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:39.923645021 +0000 UTC m=+1595.884892224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.924278 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.924416 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.924400701 +0000 UTC m=+1593.885647904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.924521 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:39.924506964 +0000 UTC m=+1595.885754147 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.949423 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.966751 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.985450 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.002648 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" containerID="cri-o://ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" gracePeriod=29 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.008303 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.029338 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.029383 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.061295 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.134854 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8a49e0c-0043-4326-b478-981d19e6480b/ovsdbserver-nb/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.134964 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.179126 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.234869 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244395 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244541 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244572 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244645 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244709 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244734 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.245959 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.247208 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:37.247148676 +0000 UTC m=+1593.208395719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.252308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config" (OuterVolumeSpecName: "config") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.256117 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.261706 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts" (OuterVolumeSpecName: "scripts") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.276852 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.293481 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.295557 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.300050 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz" (OuterVolumeSpecName: "kube-api-access-r7hdz") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "kube-api-access-r7hdz". 
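[The "No retries permitted until ... (durationBeforeRetry ...)" entries in this window show the kubelet's per-volume retry backoff: the same operator-scripts volume for root-account-create-update-nxlz6 is denied for 500ms at 22:06:35.710 and then for 1s at 22:06:36.247, while other still-failing volumes are already at 2s and 4s; each failed MountVolume.SetUp roughly doubles the wait. An illustrative sketch of that doubling schedule (not kubelet code; mount_volume is a hypothetical stand-in for the MountVolume.SetUp attempt):

#!/bin/bash
# Illustrative only: mimics the 500ms -> 1s -> 2s -> 4s delays visible in
# the log. The kubelet caps the delay eventually; no cap is shown here.
mount_volume() { [ -f /tmp/operator-scripts-ready ]; }   # hypothetical stand-in

delay=0.5
until mount_volume; do
    echo "No retries permitted for ${delay}s"
    sleep "$delay"
    delay=$(awk "BEGIN {print $delay * 2}")   # double the wait after each failure
done

The backoff resets once the referenced configmap or secret appears and the mount succeeds.]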
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.316396 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.331804 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.332166 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" containerID="cri-o://d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.380928 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.380975 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.380998 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.381010 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.384190 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.392137 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.443175 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.443614 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.454658 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lz8zj_817d8847-f022-4837-834f-a0e4b124f7ea/openstack-network-exporter/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.454789 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.471360 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.487434 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.528665 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.529314 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.546451 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.563207 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.614455 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" containerID="cri-o://b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617102 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617312 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617488 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618001 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618069 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618093 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618186 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618682 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.637212 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.637286 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.637326 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618257 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.620801 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.632326 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.639302 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config" (OuterVolumeSpecName: "config") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646850 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646893 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646910 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646925 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646934 4979 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.677861 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj" (OuterVolumeSpecName: "kube-api-access-bsbsj") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "kube-api-access-bsbsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.697962 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.703156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl" (OuterVolumeSpecName: "kube-api-access-72jjl") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "kube-api-access-72jjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.717697 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.729067 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.742589 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.743497 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.743523 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.744059 4979 generic.go:334] "Generic (PLEG): container finished" podID="94177def-b41a-4af1-bcce-a0673da9f81c" containerID="0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.744140 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerDied","Data":"0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.755477 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.755510 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.755585 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.755644 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:40.755624889 +0000 UTC m=+1596.716871922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.757548 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.781430 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.781786 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.788322 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.801257 4979 generic.go:334] "Generic (PLEG): container finished" podID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerID="4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.801407 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerDied","Data":"4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3"} Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.806603 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.806689 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.810353 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8a49e0c-0043-4326-b478-981d19e6480b/ovsdbserver-nb/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.810535 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerDied","Data":"1ba7eb4e73d21b76aae2c54799684c5d1a7e13a849894846bd2ade424074662c"} Jan 30 22:06:36 crc 
kubenswrapper[4979]: I0130 22:06:36.810604 4979 scope.go:117] "RemoveContainer" containerID="9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.810980 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.824693 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.829137 4979 generic.go:334] "Generic (PLEG): container finished" podID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerID="d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.829227 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerDied","Data":"d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.839670 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.840865 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kzdcz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" podUID="f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.847850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.848859 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.874581 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.894506 4979 generic.go:334] "Generic (PLEG): container finished" podID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerID="998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e" exitCode=0 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.894589 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerDied","Data":"998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.895810 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.915412 4979 generic.go:334] "Generic (PLEG): container finished" podID="b0baa205-eff4-4cad-a27f-db3599bba092" containerID="2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.915805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerDied","Data":"2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.931137 4979 generic.go:334] "Generic (PLEG): container finished" podID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerID="748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.931484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerDied","Data":"748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.934604 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerDied","Data":"fcd7f766ab345ea2e8c0ac6bd8fb4c89c2192ee2d80ef64d952c822915831fd5"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.935108 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.945318 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.952473 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.953247 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" containerID="cri-o://e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.969745 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" containerID="cri-o://eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131" gracePeriod=604800 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.971256 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e16537b0-b66e-4bad-a481-9d2755cf6eb5/ovsdbserver-sb/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.971347 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerDied","Data":"6de0f04b65ae33fad502fd47c75940202442c98e117caa698fb7adad6b0870b8"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.971470 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.990269 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.990583 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.998341 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.998408 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:40.99838849 +0000 UTC m=+1596.959635523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.999825 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.008385 4979 generic.go:334] "Generic (PLEG): container finished" podID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerID="3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.008514 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerDied","Data":"3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45"} Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.022095 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "glance" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="glance" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.030148 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-b6e4-account-create-update-6c4qp" podUID="fe035ddd-73a5-43fd-8b1d-343447e1f850" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.031879 4979 generic.go:334] "Generic (PLEG): container finished" podID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" exitCode=0 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.034632 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerDied","Data":"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.062854 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.064751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config" (OuterVolumeSpecName: "config") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.078166 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerID="edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.104529 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.104571 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.104581 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.105970 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.163005 4979 generic.go:334] "Generic (PLEG): container finished" podID="44df4390-d39d-42b7-904c-99d3e9680768" containerID="2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.165845 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" path="/var/lib/kubelet/pods/11b3f71c-0345-4261-8d0c-e7d700eb2932/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.167940 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" path="/var/lib/kubelet/pods/170f93fa-8e66-4ae0-ab49-b2db51c1afa5/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.169116 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" path="/var/lib/kubelet/pods/175f02fa-3089-4350-a658-c939f6e6ef9f/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.171016 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" path="/var/lib/kubelet/pods/181d93b8-d7d4-4184-beb4-f4e96f221af5/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.175497 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" path="/var/lib/kubelet/pods/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.180886 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd39b08-adf2-44da-b301-8e8694590426" path="/var/lib/kubelet/pods/6dd39b08-adf2-44da-b301-8e8694590426/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.183946 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" path="/var/lib/kubelet/pods/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.186131 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" path="/var/lib/kubelet/pods/82508003-60c8-463b-92a9-bc9521fcfa03/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.192498 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" path="/var/lib/kubelet/pods/83840d8c-fe62-449c-a3ab-5404215dce87/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: W0130 22:06:37.205091 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7f5965_9d27_4649_bb8f_9e99a57c0362.slice/crio-7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139 WatchSource:0}: Error finding container 7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139: Status 404 returned error can't find the container with id 7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.205266 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.207550 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.207596 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.215796 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" path="/var/lib/kubelet/pods/a2df91e7-6710-4ee4-a671-4b19dc5c2798/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.222022 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" path="/var/lib/kubelet/pods/bc0c5054-9597-4b94-a1d6-1f424c1d6de4/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.222230 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" exitCode=0 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.222742 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" path="/var/lib/kubelet/pods/f8b67e98-62a7-4a61-835e-8b7ec20167f3/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.230016 4979 scope.go:117] "RemoveContainer" containerID="e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231021 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "placement" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="placement" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231335 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "nova_cell0" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="nova_cell0" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231577 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "neutron" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="neutron" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231789 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "nova_api" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="nova_api" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232409 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" podUID="8573fb5d-0536-4182-95b7-f8d0a16ce994" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232451 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-0121-account-create-update-cjfbd" podUID="5d7f5965-9d27-4649-bb8f-9e99a57c0362" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232616 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-d511-account-create-update-gfm26" podUID="e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232885 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-1082-account-create-update-vm4l4" podUID="b0f67cef-fc43-42c0-967e-d51d1730b419" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.242915 4979 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lz8zj_817d8847-f022-4837-834f-a0e4b124f7ea/openstack-network-exporter/0.log" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.243075 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.260709 4979 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-nxlz6" secret="" err="secret \"galera-openstack-cell1-dockercfg-wj9ck\" not found" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.260778 4979 scope.go:117] "RemoveContainer" containerID="23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.261074 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-nxlz6_openstack(2ae1b557-b27a-4331-8c91-bb1934e91fce)\"" pod="openstack/root-account-create-update-nxlz6" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261471 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerDied","Data":"edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261548 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261579 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerDied","Data":"2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261601 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261625 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerDied","Data":"4155908da65ed980762b6600d6cd531e31e34d1e8a5cf0688a19ba647961bebc"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerStarted","Data":"23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.289530 4979 generic.go:334] "Generic (PLEG): container finished" podID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerID="70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.290546 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerDied","Data":"70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.307163 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.315227 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.316393 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.317121 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.317172 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. No retries permitted until 2026-01-30 22:06:39.317154018 +0000 UTC m=+1595.278401051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.321754 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.321869 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:41.321844254 +0000 UTC m=+1597.283091427 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.329507 4979 scope.go:117] "RemoveContainer" containerID="68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.334691 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 22:06:37 crc kubenswrapper[4979]:
Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 22:06:37 crc kubenswrapper[4979]:
Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 22:06:37 crc kubenswrapper[4979]:
Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 22:06:37 crc kubenswrapper[4979]:
Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "cinder" ]; then
Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="cinder"
Jan 30 22:06:37 crc kubenswrapper[4979]: else
Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*"
Jan 30 22:06:37 crc kubenswrapper[4979]: fi
Jan 30 22:06:37 crc kubenswrapper[4979]:
Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here:
Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates
Jan 30 22:06:37 crc kubenswrapper[4979]:
Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.336456 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-18a2-account-create-update-tgfqm" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389121 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389281 4979 scope.go:117] "RemoveContainer" containerID="cb53a0bf80799a9038c0ec96174830f51ef5adf97bb87b1dc554e2dbe52de608"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389468 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" containerID="cri-o://b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112" gracePeriod=30
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389686 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" containerID="cri-o://6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7" gracePeriod=30
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401133 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-czjz7"]
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401627 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="init"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401646 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="init"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401665 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401671 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401691 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401698 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401720 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401727 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401734 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401740 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401758 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401764 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401780 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401789 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401966 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401985 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401996 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.402013 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.402019 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.402088 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.404562 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.410758 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.418685 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.418762 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:41.418741401 +0000 UTC m=+1597.379988434 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.444544 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-czjz7"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.448462 4979 scope.go:117] "RemoveContainer" containerID="ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.469987 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.481342 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.499736 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.508995 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.517941 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.521913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.521971 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.527576 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.543497 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.545907 4979 scope.go:117] "RemoveContainer" containerID="364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.553143 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.619485 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.627350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.627452 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.628822 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.630326 4979 scope.go:117] "RemoveContainer" containerID="bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.676010 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.693779 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.723992 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.734050 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-czjz7"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.793177 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.802183 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"]
Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:37.895790 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found
Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:37.896319 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:41.896302742 +0000 UTC m=+1597.857549775 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found
Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.001082 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found
Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.001177 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed.
No retries permitted until 2026-01-30 22:06:42.001153143 +0000 UTC m=+1597.962400176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.219378 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.221255 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.222916 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.222975 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.309521 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.325528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerDied","Data":"3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.325316 4979 generic.go:334] "Generic (PLEG): container finished" podID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerID="3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.335342 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-cjfbd" event={"ID":"5d7f5965-9d27-4649-bb8f-9e99a57c0362","Type":"ContainerStarted","Data":"7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139"} Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.346730 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.368703 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.377653 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.377735 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411086 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411472 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc 
kubenswrapper[4979]: I0130 22:06:38.411803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411828 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411925 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.424884 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh" (OuterVolumeSpecName: "kube-api-access-t2vbh") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "kube-api-access-t2vbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.439206 4979 generic.go:334] "Generic (PLEG): container finished" podID="c04339fa-9eb7-4671-895b-ef768888add0" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.439323 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerDied","Data":"383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.449904 4979 generic.go:334] "Generic (PLEG): container finished" podID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerID="6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.449945 4979 generic.go:334] "Generic (PLEG): container finished" podID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerID="b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.450000 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerDied","Data":"6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.450096 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerDied","Data":"b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.464202 4979 generic.go:334] "Generic (PLEG): container finished" podID="f69eed38-4641-4703-8a87-93aedebfbff1" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.464276 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerDied","Data":"e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395"} Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.465533 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.474818 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.478647 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.478706 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.482011 4979 generic.go:334] "Generic (PLEG): container finished" podID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerID="b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.482120 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerDied","Data":"b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.489378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-6c4qp" event={"ID":"fe035ddd-73a5-43fd-8b1d-343447e1f850","Type":"ContainerStarted","Data":"7c31f7e9f74ed851718e7a8c33feb2aea66305bcdf09aebc96b0e2bef13aabcb"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496223 4979 generic.go:334] "Generic (PLEG): container finished" podID="95748319-965e-49d8-8a00-c0bc1025337d" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496387 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerDied","Data":"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 
22:06:38.496439 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerDied","Data":"e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496549 4979 scope.go:117] "RemoveContainer" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496632 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.499102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data" (OuterVolumeSpecName: "config-data") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.502820 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.521713 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" event={"ID":"8573fb5d-0536-4182-95b7-f8d0a16ce994","Type":"ContainerStarted","Data":"2cdbf10e9bfd98e947cbb9f05f455256f57b0a782b7a6b3d4f66686e4d98d351"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.522081 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.522132 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.522144 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.545859 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.546343 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-gfm26" event={"ID":"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7","Type":"ContainerStarted","Data":"e8bb199dbcb75afb57868f8e864cd1c6d1708ad3328ad435676d6e46b226671a"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.558518 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.564020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-vm4l4" event={"ID":"b0f67cef-fc43-42c0-967e-d51d1730b419","Type":"ContainerStarted","Data":"5fa59fc53b68ec331969f337e5f543de6c0346b458c5cdf6ee3c5a15c6e73440"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.597250 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerID="23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a" exitCode=1 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.597371 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerDied","Data":"23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.598120 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.635843 4979 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.635880 4979 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.661279 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.751809 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.756621 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.764678 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.783074 4979 scope.go:117] "RemoveContainer" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.805491 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b\": container with ID starting with 4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b not found: ID does not exist" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.805541 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b"} err="failed to get container status \"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b\": rpc error: code = NotFound desc = could not find container \"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b\": container with ID starting with 4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b not found: ID does not exist" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.805571 4979 scope.go:117] "RemoveContainer" containerID="8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.887775 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:59192->10.217.0.168:8776: read: connection reset by peer" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.939383 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.953529 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.088317 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.088586 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.123604 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bae0355-ad11-48d3-a13f-378354677f77" path="/var/lib/kubelet/pods/4bae0355-ad11-48d3-a13f-378354677f77/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.124650 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" 
path="/var/lib/kubelet/pods/817d8847-f022-4837-834f-a0e4b124f7ea/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.127143 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95748319-965e-49d8-8a00-c0bc1025337d" path="/var/lib/kubelet/pods/95748319-965e-49d8-8a00-c0bc1025337d/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.130642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" path="/var/lib/kubelet/pods/e16537b0-b66e-4bad-a481-9d2755cf6eb5/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.134083 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" path="/var/lib/kubelet/pods/e8a49e0c-0043-4326-b478-981d19e6480b/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.175484 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.181220 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.184219 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.184304 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.411625 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.411720 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. No retries permitted until 2026-01-30 22:06:43.411698239 +0000 UTC m=+1599.372945272 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.455774 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:41584->10.217.0.208:8775: read: connection reset by peer" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.455991 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:41568->10.217.0.208:8775: read: connection reset by peer" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.632664 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.674746 4979 generic.go:334] "Generic (PLEG): container finished" podID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerID="33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.674881 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerDied","Data":"33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.684140 4979 generic.go:334] "Generic (PLEG): container finished" podID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerID="5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.684237 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerDied","Data":"5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.703056 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerID="b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.703165 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerDied","Data":"b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.728347 4979 generic.go:334] "Generic (PLEG): container finished" podID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerID="10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.728931 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerDied","Data":"10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729251 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729343 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729392 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729465 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729498 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729529 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729619 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.730357 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.730440 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:47.730416845 +0000 UTC m=+1603.691663878 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.738155 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.743620 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.743923 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.757098 4979 generic.go:334] "Generic (PLEG): container finished" podID="44df4390-d39d-42b7-904c-99d3e9680768" containerID="99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.757225 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerDied","Data":"99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.757951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl" (OuterVolumeSpecName: "kube-api-access-l6wgl") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "kube-api-access-l6wgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.764115 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.777732 4979 generic.go:334] "Generic (PLEG): container finished" podID="b0baa205-eff4-4cad-a27f-db3599bba092" containerID="aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.777887 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerDied","Data":"aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.784541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.797369 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.798545 4979 generic.go:334] "Generic (PLEG): container finished" podID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerID="9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.798595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerDied","Data":"9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831899 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831931 4979 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831940 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831949 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831960 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831982 4979 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831992 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.859243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerDied","Data":"5c5282dd71d589822510ea8f2d38d385c993be6f5e42e4d1471904abd0c28e55"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.859315 4979 scope.go:117] "RemoveContainer" containerID="b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.859471 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.866244 4979 generic.go:334] "Generic (PLEG): container finished" podID="94177def-b41a-4af1-bcce-a0673da9f81c" containerID="1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.866350 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerDied","Data":"1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.891143 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.895976 4979 generic.go:334] "Generic (PLEG): container finished" podID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerID="db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.896090 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.897412 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerDied","Data":"db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.938357 4979 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938464 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938526 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:47.938508015 +0000 UTC m=+1603.899755048 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938857 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938886 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:47.938879495 +0000 UTC m=+1603.900126518 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.995656 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.996001 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.996148 4979 scope.go:117] "RemoveContainer" containerID="92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.016751 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.037964 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.118504 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.132490 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.156504 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" containerID="cri-o://5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.157559 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" containerID="cri-o://93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.157655 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" containerID="cri-o://fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.157718 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" containerID="cri-o://b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.162259 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" 
Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.162322 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.202746 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.216163 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.268818 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.271309 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.271436 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.300076 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.334804 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww" (OuterVolumeSpecName: "kube-api-access-shqww") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "kube-api-access-shqww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.365169 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.365692 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerName="kube-state-metrics" containerID="cri-o://10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.384614 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.386074 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs" (OuterVolumeSpecName: "logs") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.384829 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"f69eed38-4641-4703-8a87-93aedebfbff1\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.387398 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.391636 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.391669 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.391693 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.403495 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj" (OuterVolumeSpecName: "kube-api-access-sdrzj") pod "f69eed38-4641-4703-8a87-93aedebfbff1" (UID: "f69eed38-4641-4703-8a87-93aedebfbff1"). InnerVolumeSpecName "kube-api-access-sdrzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.488951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data" (OuterVolumeSpecName: "config-data") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.495207 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"f69eed38-4641-4703-8a87-93aedebfbff1\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.495645 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"f69eed38-4641-4703-8a87-93aedebfbff1\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.497462 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.497484 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.505945 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.515735 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.532745 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.591299 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.595876 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data" (OuterVolumeSpecName: "config-data") pod "f69eed38-4641-4703-8a87-93aedebfbff1" (UID: "f69eed38-4641-4703-8a87-93aedebfbff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.599338 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.625107 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.633301 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69eed38-4641-4703-8a87-93aedebfbff1" (UID: "f69eed38-4641-4703-8a87-93aedebfbff1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.636731 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.642220 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.655834 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.678820 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702441 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702517 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"c04339fa-9eb7-4671-895b-ef768888add0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702550 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"b0f67cef-fc43-42c0-967e-d51d1730b419\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702666 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"c04339fa-9eb7-4671-895b-ef768888add0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702908 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"b0f67cef-fc43-42c0-967e-d51d1730b419\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgphw\" (UniqueName: 
\"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.703073 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"c04339fa-9eb7-4671-895b-ef768888add0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.704703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0f67cef-fc43-42c0-967e-d51d1730b419" (UID: "b0f67cef-fc43-42c0-967e-d51d1730b419"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.705362 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4fc1eef-47e7-4fdd-9642-da7ce95056e8" (UID: "d4fc1eef-47e7-4fdd-9642-da7ce95056e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706054 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706230 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706256 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706271 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.718839 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9" (OuterVolumeSpecName: "kube-api-access-7rlm9") pod "b0f67cef-fc43-42c0-967e-d51d1730b419" (UID: "b0f67cef-fc43-42c0-967e-d51d1730b419"). InnerVolumeSpecName "kube-api-access-7rlm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.721425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk" (OuterVolumeSpecName: "kube-api-access-f4sfk") pod "c04339fa-9eb7-4671-895b-ef768888add0" (UID: "c04339fa-9eb7-4671-895b-ef768888add0"). InnerVolumeSpecName "kube-api-access-f4sfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.732557 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw" (OuterVolumeSpecName: "kube-api-access-jgphw") pod "d4fc1eef-47e7-4fdd-9642-da7ce95056e8" (UID: "d4fc1eef-47e7-4fdd-9642-da7ce95056e8"). InnerVolumeSpecName "kube-api-access-jgphw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.775601 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c04339fa-9eb7-4671-895b-ef768888add0" (UID: "c04339fa-9eb7-4671-895b-ef768888add0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.805293 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data" (OuterVolumeSpecName: "config-data") pod "c04339fa-9eb7-4671-895b-ef768888add0" (UID: "c04339fa-9eb7-4671-895b-ef768888add0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807512 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807590 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807658 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807888 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808005 
4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808098 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"2ae1b557-b27a-4331-8c91-bb1934e91fce\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808150 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808192 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808306 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808332 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808428 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"2ae1b557-b27a-4331-8c91-bb1934e91fce\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808467 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808504 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 
22:06:40.808968 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808982 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808992 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.809001 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.809010 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: E0130 22:06:40.809097 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:40 crc kubenswrapper[4979]: E0130 22:06:40.809152 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:48.809135852 +0000 UTC m=+1604.770382885 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.811846 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.814525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.818010 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g" (OuterVolumeSpecName: "kube-api-access-s998g") pod "2ae1b557-b27a-4331-8c91-bb1934e91fce" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce"). InnerVolumeSpecName "kube-api-access-s998g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.818075 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.825289 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk" (OuterVolumeSpecName: "kube-api-access-x2cqk") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "kube-api-access-x2cqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.828817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ae1b557-b27a-4331-8c91-bb1934e91fce" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.829683 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.830190 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.839975 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx" (OuterVolumeSpecName: "kube-api-access-d4nsx") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "kube-api-access-d4nsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.846326 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts" (OuterVolumeSpecName: "scripts") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.893326 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data" (OuterVolumeSpecName: "config-data") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.910309 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.910589 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911226 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911251 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911264 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911278 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911289 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911306 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: W0130 22:06:40.911304 4979 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/21dfd874-e50d-4e61-a634-9f47ee92ff4f/volumes/kubernetes.io~secret/combined-ca-bundle Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911319 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911327 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911334 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911370 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911383 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911421 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.919138 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-gfm26" event={"ID":"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7","Type":"ContainerDied","Data":"e8bb199dbcb75afb57868f8e864cd1c6d1708ad3328ad435676d6e46b226671a"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.919524 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8bb199dbcb75afb57868f8e864cd1c6d1708ad3328ad435676d6e46b226671a" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.933577 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.939493 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" exitCode=0 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.939818 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" exitCode=2 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.939991 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" exitCode=0 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.940346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.940494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.940647 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.946464 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-6c4qp" event={"ID":"fe035ddd-73a5-43fd-8b1d-343447e1f850","Type":"ContainerDied","Data":"7c31f7e9f74ed851718e7a8c33feb2aea66305bcdf09aebc96b0e2bef13aabcb"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.946526 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c31f7e9f74ed851718e7a8c33feb2aea66305bcdf09aebc96b0e2bef13aabcb" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.946655 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.952165 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerID="10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755" exitCode=2 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.952279 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerDied","Data":"10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.966108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerDied","Data":"7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.966176 4979 scope.go:117] "RemoveContainer" containerID="998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.966320 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.972824 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-cjfbd" event={"ID":"5d7f5965-9d27-4649-bb8f-9e99a57c0362","Type":"ContainerDied","Data":"7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.972955 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.976394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-vm4l4" event={"ID":"b0f67cef-fc43-42c0-967e-d51d1730b419","Type":"ContainerDied","Data":"5fa59fc53b68ec331969f337e5f543de6c0346b458c5cdf6ee3c5a15c6e73440"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.976470 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.983679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerDied","Data":"9644ea1b50d881a5fc87efbeb25d5fe3195c9de5bf0f6fd1b1d5b2e65c2a5124"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.983835 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.986677 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" event={"ID":"8573fb5d-0536-4182-95b7-f8d0a16ce994","Type":"ContainerDied","Data":"2cdbf10e9bfd98e947cbb9f05f455256f57b0a782b7a6b3d4f66686e4d98d351"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.986829 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cdbf10e9bfd98e947cbb9f05f455256f57b0a782b7a6b3d4f66686e4d98d351" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.987815 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.998126 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerDied","Data":"3dde96c5169697a3e0c9d8b160bc83a4fafb1d44e05b294c10a09b1f06d958c9"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.998322 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.018056 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.019617 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.019715 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.019833 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.018288 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.020076 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.020043987 +0000 UTC m=+1604.981291080 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.021514 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.022085 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data" (OuterVolumeSpecName: "config-data") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.022254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerDied","Data":"44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.022332 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.024289 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.024716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerDied","Data":"b83c4ed8bbda19ed5aa54ca0fc84bb29d05f7a78681b54738255e43bd19127ba"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.024795 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.038526 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.038543 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-tgfqm" event={"ID":"d4fc1eef-47e7-4fdd-9642-da7ce95056e8","Type":"ContainerDied","Data":"d9676cb7e0eb5ddecab92aeb166656644b6133c3cd8ff91f6626cb611a3b2256"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.045025 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerDied","Data":"b64735411ca3cd7394e31868ccdaa7a77e584aec6259c66bd68d292da88aa3c5"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.045477 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.074084 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.076868 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.082398 4979 scope.go:117] "RemoveContainer" containerID="3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.121845 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.121927 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod \"8573fb5d-0536-4182-95b7-f8d0a16ce994\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.122206 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"8573fb5d-0536-4182-95b7-f8d0a16ce994\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.122264 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.127995 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d7f5965-9d27-4649-bb8f-9e99a57c0362" (UID: "5d7f5965-9d27-4649-bb8f-9e99a57c0362"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.134132 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.136088 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.134215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8573fb5d-0536-4182-95b7-f8d0a16ce994" (UID: "8573fb5d-0536-4182-95b7-f8d0a16ce994"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.141546 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv" (OuterVolumeSpecName: "kube-api-access-9dfzv") pod "8573fb5d-0536-4182-95b7-f8d0a16ce994" (UID: "8573fb5d-0536-4182-95b7-f8d0a16ce994"). InnerVolumeSpecName "kube-api-access-9dfzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.141717 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv" (OuterVolumeSpecName: "kube-api-access-hfxkv") pod "5d7f5965-9d27-4649-bb8f-9e99a57c0362" (UID: "5d7f5965-9d27-4649-bb8f-9e99a57c0362"). InnerVolumeSpecName "kube-api-access-hfxkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.144501 4979 scope.go:117] "RemoveContainer" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.236876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237072 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"fe035ddd-73a5-43fd-8b1d-343447e1f850\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"fe035ddd-73a5-43fd-8b1d-343447e1f850\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237235 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237861 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237884 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237898 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.238546 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe035ddd-73a5-43fd-8b1d-343447e1f850" (UID: "fe035ddd-73a5-43fd-8b1d-343447e1f850"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.256763 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv" (OuterVolumeSpecName: "kube-api-access-gcnjv") pod "fe035ddd-73a5-43fd-8b1d-343447e1f850" (UID: "fe035ddd-73a5-43fd-8b1d-343447e1f850"). InnerVolumeSpecName "kube-api-access-gcnjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.257113 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" (UID: "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.275182 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj" (OuterVolumeSpecName: "kube-api-access-hhspj") pod "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" (UID: "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7"). InnerVolumeSpecName "kube-api-access-hhspj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343148 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343181 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343191 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343201 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.346117 4979 scope.go:117] "RemoveContainer" containerID="1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.700444 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.701260 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" 
containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.701529 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.701557 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.702691 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.704949 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.706370 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.706454 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.823202 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.905120 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" path="/var/lib/kubelet/pods/51b68702-8d5d-43f3-b4e7-936ceb5de933/volumes" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.907257 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2" path="/var/lib/kubelet/pods/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2/volumes" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910023 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910078 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910350 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910372 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910393 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910408 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910426 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910446 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910461 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910475 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910495 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910514 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.910943 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910961 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.910979 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910990 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911006 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911015 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911028 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="mysql-bootstrap" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911669 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" 
containerName="mysql-bootstrap" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911685 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911695 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911709 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911717 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911729 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911737 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911752 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911759 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911781 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911789 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911800 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911809 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911822 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911832 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911845 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911854 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911864 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911871 4979 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912138 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912156 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912173 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912188 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912204 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912213 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912223 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912232 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912243 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912258 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912269 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912284 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913086 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913107 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913122 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913138 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913155 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913170 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913190 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913215 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913232 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.915023 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" containerID="cri-o://11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a" gracePeriod=30 Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.915280 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.915757 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-f5778c484-5rg8p" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" containerID="cri-o://dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db" gracePeriod=30 Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.918519 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.932005 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.940081 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.93961172 +0000 UTC m=+1605.900858743 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.942251 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.985629 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.000622 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" probeResult="failure" output=< Jan 30 22:06:42 crc kubenswrapper[4979]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 30 22:06:42 crc kubenswrapper[4979]: > Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.033483 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.033581 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.033813 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.033939 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:50.033908598 +0000 UTC m=+1605.995155811 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.069482 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerDied","Data":"bb24789e94c037f8d2c30cb247391e1793581183cde1ad3d02b4c483f6507c5b"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.069645 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb24789e94c037f8d2c30cb247391e1793581183cde1ad3d02b4c483f6507c5b" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.087363 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerDied","Data":"ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.087447 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.094614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerDied","Data":"990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.094680 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.098670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerDied","Data":"c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.098721 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.101814 4979 generic.go:334] "Generic (PLEG): container finished" podID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerID="32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2" exitCode=0 Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.101869 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerDied","Data":"32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.104600 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerDied","Data":"1ef7dfba2654b435b80b29127f1c9700a1f54fff7b56b29307a2ed4beab2ff4b"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.104635 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef7dfba2654b435b80b29127f1c9700a1f54fff7b56b29307a2ed4beab2ff4b" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.113858 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerDied","Data":"e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.113912 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.134489 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerDied","Data":"63cab1632ab5734414fe0ad9e4d6c6c07d6d67f4ee2af410de1ca78ec4b0eb26"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.134550 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63cab1632ab5734414fe0ad9e4d6c6c07d6d67f4ee2af410de1ca78ec4b0eb26" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.136637 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.136700 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.136843 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.136906 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:42.636887279 +0000 UTC m=+1598.598134312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.141369 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.141473 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:42.641445981 +0000 UTC m=+1598.602693014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.143705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerDied","Data":"d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.143784 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.143728 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.160960 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161138 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerDied","Data":"310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161274 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161960 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.162343 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.173118 4979 scope.go:117] "RemoveContainer" containerID="0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.173436 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.173452 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.183928 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.193686 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.202378 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.215233 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.227585 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.227704 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.232788 4979 scope.go:117] "RemoveContainer" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238087 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238157 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238330 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238356 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238570 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: 
\"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238636 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.239790 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs" (OuterVolumeSpecName: "logs") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.245401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2" (OuterVolumeSpecName: "kube-api-access-fw6m2") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "kube-api-access-fw6m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.248297 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.249500 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.254807 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" containerID="cri-o://62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" gracePeriod=30 Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.290553 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.292500 4979 scope.go:117] "RemoveContainer" containerID="23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.319603 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.328515 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.329540 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345451 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345505 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345568 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data" (OuterVolumeSpecName: "config-data") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345705 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345800 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345866 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345933 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345971 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346019 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346083 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346257 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346379 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346460 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346539 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346563 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346588 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347120 4979 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs" (OuterVolumeSpecName: "logs") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347614 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347910 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347992 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348095 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348191 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348262 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348332 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.350566 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.355348 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cn72x operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-bb3f-account-create-update-f78xh" podUID="ba12ac60-82de-4c7b-9411-4f36b0aedf3b" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.357842 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs" (OuterVolumeSpecName: "logs") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.358155 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl" (OuterVolumeSpecName: "kube-api-access-4zhrl") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "kube-api-access-4zhrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.358999 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs" (OuterVolumeSpecName: "logs") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.359156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.360901 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.380448 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns" (OuterVolumeSpecName: "kube-api-access-qxxns") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "kube-api-access-qxxns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.387143 4979 scope.go:117] "RemoveContainer" containerID="6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.388005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg" (OuterVolumeSpecName: "kube-api-access-txcpg") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "kube-api-access-txcpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.415184 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts" (OuterVolumeSpecName: "scripts") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.416667 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.417150 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460018 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460402 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460450 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460547 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460573 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460600 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460652 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460682 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460727 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460749 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460789 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460834 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460861 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460895 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460940 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461044 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461065 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461104 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461139 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461204 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461221 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461305 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461844 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461863 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461873 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461886 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461896 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461907 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461917 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461929 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.462271 4979 scope.go:117] "RemoveContainer" containerID="b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.462589 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.464583 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs" (OuterVolumeSpecName: "logs") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.476246 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.477464 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs" (OuterVolumeSpecName: "logs") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.479893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data" (OuterVolumeSpecName: "config-data") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.481715 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.486161 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs" (OuterVolumeSpecName: "logs") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.522119 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw" (OuterVolumeSpecName: "kube-api-access-r9nbw") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "kube-api-access-r9nbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.552162 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.563160 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564249 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564570 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564658 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564730 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564961 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565127 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565310 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565444 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565676 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.566698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs" (OuterVolumeSpecName: "logs") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567778 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567802 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567818 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567828 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567838 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567848 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567862 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567871 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.568080 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.570118 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.577226 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8" (OuterVolumeSpecName: "kube-api-access-df5k8") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "kube-api-access-df5k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.580106 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.580172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.584999 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts" (OuterVolumeSpecName: "scripts") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.595306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts" (OuterVolumeSpecName: "scripts") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.598305 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.605312 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65" (OuterVolumeSpecName: "kube-api-access-v8x65") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "kube-api-access-v8x65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.643769 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg" (OuterVolumeSpecName: "kube-api-access-bv5tg") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "kube-api-access-bv5tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.748401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts" (OuterVolumeSpecName: "scripts") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.748328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn" (OuterVolumeSpecName: "kube-api-access-vp9nn") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "kube-api-access-vp9nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760337 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760463 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760797 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760815 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760851 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760873 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760888 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") on node 
\"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760908 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760921 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760933 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760958 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760971 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760984 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.764655 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.764805 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:43.764779064 +0000 UTC m=+1599.726026107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.765943 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.795211 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data" (OuterVolumeSpecName: "config-data") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.795541 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.795609 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:43.795581863 +0000 UTC m=+1599.756828896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.853391 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.863384 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.896395 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.914303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.921882 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.968242 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.001225 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:43 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: if [ -n "" ]; then Jan 30 22:06:43 crc kubenswrapper[4979]: GRANT_DATABASE="" Jan 30 22:06:43 crc kubenswrapper[4979]: else Jan 30 22:06:43 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:43 crc kubenswrapper[4979]: fi Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:43 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:43 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:43 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:43 crc kubenswrapper[4979]: # support updates Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.002994 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-czjz7" podUID="103e7f4c-fbf4-471c-9e8f-dbb281d59de1" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.006055 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.017200 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.053725 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.110830 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.116739 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.128071 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.137060 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.148573 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data" (OuterVolumeSpecName: "config-data") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.161102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.177310 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.181673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190756 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190804 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190821 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190842 4979 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190858 4979 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190873 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190890 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190904 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.192355 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data" (OuterVolumeSpecName: "config-data") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.197896 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.203186 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" path="/var/lib/kubelet/pods/21dfd874-e50d-4e61-a634-9f47ee92ff4f/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.204536 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" path="/var/lib/kubelet/pods/2ae1b557-b27a-4331-8c91-bb1934e91fce/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.209857 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" path="/var/lib/kubelet/pods/4320dd9b-0e3c-474b-bb1a-e00a72ae2938/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.212537 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7f5965-9d27-4649-bb8f-9e99a57c0362" path="/var/lib/kubelet/pods/5d7f5965-9d27-4649-bb8f-9e99a57c0362/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.213270 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" path="/var/lib/kubelet/pods/81fec9c6-beaa-4731-b527-51284f88fb92/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.216512 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8573fb5d-0536-4182-95b7-f8d0a16ce994" path="/var/lib/kubelet/pods/8573fb5d-0536-4182-95b7-f8d0a16ce994/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.218462 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" path="/var/lib/kubelet/pods/94177def-b41a-4af1-bcce-a0673da9f81c/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.219523 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" path="/var/lib/kubelet/pods/9686aad4-f2a7-4878-ae8b-f6142e93703a/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.220622 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f67cef-fc43-42c0-967e-d51d1730b419" path="/var/lib/kubelet/pods/b0f67cef-fc43-42c0-967e-d51d1730b419/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.221239 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" path="/var/lib/kubelet/pods/b4e29508-bcd2-4f07-807c-dde529c4fa24/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.222707 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04339fa-9eb7-4671-895b-ef768888add0" path="/var/lib/kubelet/pods/c04339fa-9eb7-4671-895b-ef768888add0/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.223760 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8" path="/var/lib/kubelet/pods/d4fc1eef-47e7-4fdd-9642-da7ce95056e8/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.224487 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" path="/var/lib/kubelet/pods/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.225020 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" path="/var/lib/kubelet/pods/f69eed38-4641-4703-8a87-93aedebfbff1/volumes" 
Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.228331 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac7007d-8147-477c-a42e-2463290030ff" path="/var/lib/kubelet/pods/fac7007d-8147-477c-a42e-2463290030ff/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.229555 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe035ddd-73a5-43fd-8b1d-343447e1f850" path="/var/lib/kubelet/pods/fe035ddd-73a5-43fd-8b1d-343447e1f850/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.231196 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.232500 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.232832 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234378 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234478 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234729 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234782 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234998 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.235317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.235729 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.243852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.266063 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data" (OuterVolumeSpecName: "config-data") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.289175 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292814 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292849 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292860 4979 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292874 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292885 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.293241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.301574 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.332522 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.374787 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.394924 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.395015 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.395041 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.395051 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.405043 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.430260 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.442217 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data" (OuterVolumeSpecName: "config-data") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.446963 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.447363 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.451749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499146 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499233 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499248 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499261 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499273 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499309 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.503401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.544258 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data" (OuterVolumeSpecName: "config-data") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.604006 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.604531 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.714166 4979 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 22:06:43 crc kubenswrapper[4979]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T22:06:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 22:06:43 crc kubenswrapper[4979]: /etc/init.d/functions: line 589: 477 Alarm clock "$@" Jan 30 22:06:43 crc kubenswrapper[4979]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-kxk8g" message=< Jan 30 22:06:43 crc kubenswrapper[4979]: Exiting ovn-controller (1) [FAILED] Jan 30 22:06:43 crc kubenswrapper[4979]: Killing ovn-controller (1) [ OK ] Jan 30 22:06:43 crc kubenswrapper[4979]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 30 22:06:43 crc kubenswrapper[4979]: 2026-01-30T22:06:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 22:06:43 crc kubenswrapper[4979]: /etc/init.d/functions: line 589: 477 Alarm clock "$@" Jan 30 22:06:43 crc kubenswrapper[4979]: > Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.714259 4979 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 22:06:43 crc kubenswrapper[4979]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T22:06:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 22:06:43 crc kubenswrapper[4979]: /etc/init.d/functions: line 589: 477 Alarm clock "$@" Jan 30 22:06:43 crc kubenswrapper[4979]: > pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" containerID="cri-o://2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.714318 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" containerID="cri-o://2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" gracePeriod=21 Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.824699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-czjz7" event={"ID":"103e7f4c-fbf4-471c-9e8f-dbb281d59de1","Type":"ContainerStarted","Data":"335f4b094e47edce7c0b5be42fdbe6f236f4c3629392ba85436681dc5052e8e7"} Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.824743 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerDied","Data":"b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff"} Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.824762 4979 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.825916 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.825968 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.826259 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.826349 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:45.826328719 +0000 UTC m=+1601.787575752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.836451 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.836562 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:45.836534503 +0000 UTC m=+1601.797781536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.895594 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.927905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.927982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928023 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928086 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928123 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928159 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928215 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928310 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928337 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928390 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: 
\"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928487 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.934510 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.937831 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.943276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info" (OuterVolumeSpecName: "pod-info") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.944342 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.955779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.957241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.959664 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j" (OuterVolumeSpecName: "kube-api-access-h8t7j") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "kube-api-access-h8t7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.961310 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.989713 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data" (OuterVolumeSpecName: "config-data") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.006823 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf" (OuterVolumeSpecName: "server-conf") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040350 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040397 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040408 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040416 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040426 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040438 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040473 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040484 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040496 4979 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040506 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.128952 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.143861 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.198784 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.214745 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.227547 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.231689 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.234215 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.234400 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.235106 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.235148 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.247951 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.272701 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.291136 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kxk8g_5e0b30c9-4972-4476-90e8-eec8d5d44ce5/ovn-controller/0.log" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.291244 4979 generic.go:334] "Generic (PLEG): container finished" podID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerID="2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" exitCode=137 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.291417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerDied","Data":"2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.294488 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.295597 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.319867 4979 generic.go:334] "Generic (PLEG): container finished" podID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerID="eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131" exitCode=0 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.320229 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerDied","Data":"eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.320366 4979 scope.go:117] "RemoveContainer" containerID="eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.325014 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.334850 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e7cc7cf6-3592-4e25-9578-27ae56d6909b/ovn-northd/0.log" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.335490 4979 generic.go:334] "Generic (PLEG): container finished" podID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" exitCode=139 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.335626 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerDied","Data":"e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.348880 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349610 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349808 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349910 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350010 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350109 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350230 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350300 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350420 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.352440 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.354787 4979 generic.go:334] "Generic (PLEG): container finished" podID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerID="11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a" exitCode=0 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.354932 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerDied","Data":"11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.367394 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.373397 4979 scope.go:117] "RemoveContainer" containerID="d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.373446 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.388196 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.388594 4979 generic.go:334] "Generic (PLEG): container finished" podID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" exitCode=0 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.388838 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.389394 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.389489 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerDied","Data":"d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.390474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info" (OuterVolumeSpecName: "pod-info") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.390661 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.397593 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.403367 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.406266 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl" (OuterVolumeSpecName: "kube-api-access-n7qvl") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "kube-api-access-n7qvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.406368 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.407273 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data" (OuterVolumeSpecName: "config-data") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.413346 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.420015 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.431487 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.442561 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.452385 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf" (OuterVolumeSpecName: "server-conf") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.452576 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: W0130 22:06:44.452675 4979 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e28a1e34-b97c-4090-adf8-fa3e2b766365/volumes/kubernetes.io~configmap/server-conf Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.452687 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf" (OuterVolumeSpecName: "server-conf") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453400 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453433 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453450 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453478 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453492 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453505 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453516 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453530 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453540 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453552 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453769 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.460283 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.475966 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.478770 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.488993 4979 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.507794 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.513154 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.517303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.519772 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.528804 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.537870 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.540383 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.554728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.554798 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.555548 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.555568 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.556098 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "103e7f4c-fbf4-471c-9e8f-dbb281d59de1" (UID: "103e7f4c-fbf4-471c-9e8f-dbb281d59de1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.584519 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh" (OuterVolumeSpecName: "kube-api-access-k6sfh") pod "103e7f4c-fbf4-471c-9e8f-dbb281d59de1" (UID: "103e7f4c-fbf4-471c-9e8f-dbb281d59de1"). InnerVolumeSpecName "kube-api-access-k6sfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.665757 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.665923 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.665972 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666011 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666096 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666613 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666630 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.673618 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data" (OuterVolumeSpecName: "config-data") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.681970 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.687280 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r" (OuterVolumeSpecName: "kube-api-access-4mk5r") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "kube-api-access-4mk5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.738418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.768900 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.768973 4979 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.768986 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.769004 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.796200 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.871469 4979 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.912542 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.913164 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.913961 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.914084 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.006243 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.026128 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.172:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.026300 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.172:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.030148 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kxk8g_5e0b30c9-4972-4476-90e8-eec8d5d44ce5/ovn-controller/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.030417 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.033909 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e7cc7cf6-3592-4e25-9578-27ae56d6909b/ovn-northd/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.034711 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078365 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078511 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078579 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078619 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"2f627a1e-42e6-4af6-90f1-750c01bcf076\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078651 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078788 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078847 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078918 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079002 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079125 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079169 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"2f627a1e-42e6-4af6-90f1-750c01bcf076\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079411 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079457 4979 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"2f627a1e-42e6-4af6-90f1-750c01bcf076\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079525 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.080271 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.081413 4979 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.087134 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts" (OuterVolumeSpecName: "scripts") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.088157 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config" (OuterVolumeSpecName: "config") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.088442 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.106519 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run" (OuterVolumeSpecName: "var-run") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.121835 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" path="/var/lib/kubelet/pods/3ae89cf4-f9f4-456b-947f-be87514b79ff/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.122764 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44df4390-d39d-42b7-904c-99d3e9680768" path="/var/lib/kubelet/pods/44df4390-d39d-42b7-904c-99d3e9680768/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.123624 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" path="/var/lib/kubelet/pods/54d2662c-bd60-4a08-accd-e30f0a51518c/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.125760 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.130804 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv" (OuterVolumeSpecName: "kube-api-access-mgffv") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "kube-api-access-mgffv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.133804 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" path="/var/lib/kubelet/pods/5c466a98-f01c-49ab-841a-8f35c54e71f3/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.134947 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" path="/var/lib/kubelet/pods/981f1fee-4d2a-4d80-bf38-80557b6c5033/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.138169 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr" (OuterVolumeSpecName: "kube-api-access-vcsgr") pod "2f627a1e-42e6-4af6-90f1-750c01bcf076" (UID: "2f627a1e-42e6-4af6-90f1-750c01bcf076"). InnerVolumeSpecName "kube-api-access-vcsgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.138470 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts" (OuterVolumeSpecName: "scripts") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.139578 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c" (OuterVolumeSpecName: "kube-api-access-5dn9c") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "kube-api-access-5dn9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.152245 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" path="/var/lib/kubelet/pods/aec2e945-509e-4cbb-9988-9f6cc840cd62/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.153117 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" path="/var/lib/kubelet/pods/b0baa205-eff4-4cad-a27f-db3599bba092/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.155567 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" path="/var/lib/kubelet/pods/c808d1a7-071b-4af7-b86d-adbc0e98803b/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.175687 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" path="/var/lib/kubelet/pods/cdfe8d13-8537-4477-ae9e-5c9aa6e104de/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.176416 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" path="/var/lib/kubelet/pods/fe5eba1b-535d-4519-97c5-5e8b8f003d96/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.199287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data" (OuterVolumeSpecName: "config-data") pod "2f627a1e-42e6-4af6-90f1-750c01bcf076" (UID: "2f627a1e-42e6-4af6-90f1-750c01bcf076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.199969 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200003 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200018 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200060 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200071 4979 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200082 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200091 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") on node 
\"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200100 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200111 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200185 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.209073 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f627a1e-42e6-4af6-90f1-750c01bcf076" (UID: "2f627a1e-42e6-4af6-90f1-750c01bcf076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.229513 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.236949 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.262629 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.281402 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.289226 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301653 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301689 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301702 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301717 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301730 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301744 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.421842 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422554 4979 generic.go:334] "Generic (PLEG): container finished" podID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" exitCode=0 Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422704 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerDied","Data":"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422747 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerDied","Data":"ca8441f7e30661b52f9821e4f8bade797db77f1bc59f74f658c35d0b1cade61a"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422770 4979 scope.go:117] "RemoveContainer" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.437944 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.438289 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-czjz7" event={"ID":"103e7f4c-fbf4-471c-9e8f-dbb281d59de1","Type":"ContainerDied","Data":"335f4b094e47edce7c0b5be42fdbe6f236f4c3629392ba85436681dc5052e8e7"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.450547 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kxk8g_5e0b30c9-4972-4476-90e8-eec8d5d44ce5/ovn-controller/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.450817 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.451074 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerDied","Data":"96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.454385 4979 generic.go:334] "Generic (PLEG): container finished" podID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" exitCode=0 Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.454441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerDied","Data":"62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.456018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerDied","Data":"a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.456153 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.458474 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e7cc7cf6-3592-4e25-9578-27ae56d6909b/ovn-northd/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.459169 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerDied","Data":"2bd740bd191cb301e1ace5a3abcf92c5ccb570c941fcbb8171a41eb9fdac51bb"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.459380 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.460779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerDied","Data":"07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.460938 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.469669 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerDied","Data":"bef9626e17c775699e3abae85cd19e88917b71194c8acdd56a70c42320faed2f"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.469785 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.484010 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerID="dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db" exitCode=0 Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.484225 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.484476 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerDied","Data":"dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504422 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504561 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504597 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504627 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504671 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504712 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504821 4979 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.542378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb" (OuterVolumeSpecName: "kube-api-access-sb7gb") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "kube-api-access-sb7gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.549817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.574991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.606931 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.606954 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.606964 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.609916 4979 scope.go:117] "RemoveContainer" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.610743 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.620931 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.664123 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.667179 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.671276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.683863 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config" (OuterVolumeSpecName: "config") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.694025 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.699524 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709322 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709357 4979 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709372 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709381 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.906957 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c29874_a63d_4d35_a1a6_256d811ac6f8.slice/crio-dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.914175 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.914228 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.914368 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.914432 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.914411105 +0000 UTC m=+1605.875658148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.919241 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.919293 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.919280196 +0000 UTC m=+1605.880527229 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.097363 4979 scope.go:117] "RemoveContainer" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.100460 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260\": container with ID starting with cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260 not found: ID does not exist" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.100526 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260"} err="failed to get container status \"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260\": rpc error: code = NotFound desc = could not find container \"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260\": container with ID starting with cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260 not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.100562 4979 scope.go:117] "RemoveContainer" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.101126 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a\": container with ID starting with 94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a not found: ID does not exist" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.101155 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a"} err="failed to get container status \"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a\": rpc error: code = NotFound desc = could not find container \"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a\": container with ID starting with 94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.101179 4979 scope.go:117] "RemoveContainer" containerID="2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.101211 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.116551 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.184238 4979 scope.go:117] "RemoveContainer" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.208708 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.214651 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235718 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235865 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235904 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235944 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236004 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236067 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236107 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236152 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236184 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236221 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236289 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236331 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236367 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236399 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.262428 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.266589 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.272330 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.274779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.281312 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.291973 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.294051 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.295606 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.299657 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql" (OuterVolumeSpecName: "kube-api-access-plqql") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "kube-api-access-plqql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.300189 4979 scope.go:117] "RemoveContainer" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.303164 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts" (OuterVolumeSpecName: "scripts") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.316332 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg" (OuterVolumeSpecName: "kube-api-access-rqqfg") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). 
InnerVolumeSpecName "kube-api-access-rqqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.322448 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.324170 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.326978 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343748 4979 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343818 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343834 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343848 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343890 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343905 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343918 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343930 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343941 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343980 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.349999 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.358762 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.361783 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.367905 4979 scope.go:117] "RemoveContainer" containerID="80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.379106 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.387873 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.393234 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data" (OuterVolumeSpecName: "config-data") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.394672 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.399144 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.404673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.409660 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.410608 4979 scope.go:117] "RemoveContainer" containerID="11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.415509 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445056 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445122 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445180 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445279 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445327 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445460 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445502 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445869 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445898 4979 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445910 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445920 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445931 4979 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445942 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445951 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445960 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.446496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.446810 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.450532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts" (OuterVolumeSpecName: "scripts") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.450675 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79" (OuterVolumeSpecName: "kube-api-access-7xc79") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "kube-api-access-7xc79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.468151 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.480005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.499812 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.513918 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.516918 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521415 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" exitCode=0 Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521506 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521519 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521749 4979 scope.go:117] "RemoveContainer" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.525335 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.525345 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerDied","Data":"3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.532435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerDied","Data":"78ea57414491f2323050c139427e26db676dbcbe77ee157ba12f1a06c2d26416"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.532567 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555787 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555825 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555836 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555847 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555859 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555869 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555878 4979 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555888 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.561020 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.565316 4979 scope.go:117] "RemoveContainer" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.567267 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.570273 4979 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data" (OuterVolumeSpecName: "config-data") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.595542 4979 scope.go:117] "RemoveContainer" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.600831 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.618699 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.627318 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.628826 4979 scope.go:117] "RemoveContainer" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.638684 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.659797 4979 scope.go:117] "RemoveContainer" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.660522 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed\": container with ID starting with 93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed not found: ID does not exist" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660570 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed"} err="failed to get container status \"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed\": rpc error: code = NotFound desc = could not find container \"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed\": container with ID starting with 93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660606 4979 scope.go:117] "RemoveContainer" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.660896 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511\": container with ID starting with fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511 not found: ID does not exist" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660930 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511"} err="failed to get container status 
\"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511\": rpc error: code = NotFound desc = could not find container \"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511\": container with ID starting with fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511 not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660942 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660951 4979 scope.go:117] "RemoveContainer" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.662660 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c\": container with ID starting with b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c not found: ID does not exist" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.662710 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c"} err="failed to get container status \"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c\": rpc error: code = NotFound desc = could not find container \"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c\": container with ID starting with b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.662753 4979 scope.go:117] "RemoveContainer" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.663258 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267\": container with ID starting with 5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267 not found: ID does not exist" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.663338 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267"} err="failed to get container status \"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267\": rpc error: code = NotFound desc = could not find container \"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267\": container with ID starting with 5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267 not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.663390 4979 scope.go:117] "RemoveContainer" containerID="dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.694489 4979 scope.go:117] "RemoveContainer" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.699995 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.700012 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.700692 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.701069 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.701113 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.702698 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.716170 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.716249 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.745384 4979 scope.go:117] "RemoveContainer" containerID="c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 
22:06:46.862288 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.874991 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.080199 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103e7f4c-fbf4-471c-9e8f-dbb281d59de1" path="/var/lib/kubelet/pods/103e7f4c-fbf4-471c-9e8f-dbb281d59de1/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.080647 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" path="/var/lib/kubelet/pods/2f627a1e-42e6-4af6-90f1-750c01bcf076/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.081318 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" path="/var/lib/kubelet/pods/3b34adef-df84-42dd-a052-5e543c4182b5/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.084164 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" path="/var/lib/kubelet/pods/5e0b30c9-4972-4476-90e8-eec8d5d44ce5/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.085797 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" path="/var/lib/kubelet/pods/6795c6d5-6bb8-432f-b7ca-f29f33298093/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.086466 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" path="/var/lib/kubelet/pods/93c29874-a63d-4d35-a1a6-256d811ac6f8/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.087690 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" path="/var/lib/kubelet/pods/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.089020 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba12ac60-82de-4c7b-9411-4f36b0aedf3b" path="/var/lib/kubelet/pods/ba12ac60-82de-4c7b-9411-4f36b0aedf3b/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.089334 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" path="/var/lib/kubelet/pods/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.090555 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" path="/var/lib/kubelet/pods/e28a1e34-b97c-4090-adf8-fa3e2b766365/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.091224 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" path="/var/lib/kubelet/pods/e7cc7cf6-3592-4e25-9578-27ae56d6909b/volumes" Jan 30 22:06:48 crc kubenswrapper[4979]: I0130 22:06:48.599350 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: i/o timeout" Jan 30 22:06:48 crc kubenswrapper[4979]: I0130 22:06:48.670077 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" probeResult="failure" 
output="dial tcp 10.217.0.103:5671: i/o timeout" Jan 30 22:06:51 crc kubenswrapper[4979]: I0130 22:06:51.070358 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.070916 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.699183 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.699708 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.700243 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.701025 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.701071 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.701901 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.710634 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.710736 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.698944 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.699906 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.700238 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.700349 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.700668 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.702193 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.708208 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.708296 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.699180 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.700465 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.700983 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.701090 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.701098 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.702674 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.704095 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.704146 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:04 crc kubenswrapper[4979]: I0130 22:07:04.070691 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:04 crc kubenswrapper[4979]: E0130 22:07:04.070978 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:04 crc kubenswrapper[4979]: I0130 22:07:04.771287 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3" exitCode=137 Jan 30 22:07:04 crc kubenswrapper[4979]: I0130 22:07:04.771517 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.427112 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.521515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522047 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522153 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522212 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522245 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522293 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.524191 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache" (OuterVolumeSpecName: "cache") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.524887 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock" (OuterVolumeSpecName: "lock") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.536893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk" (OuterVolumeSpecName: "kube-api-access-28trk") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "kube-api-access-28trk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.536966 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.540888 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624853 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624896 4979 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624910 4979 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624950 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624969 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.643850 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.648208 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmjt2_6ed4b9c3-3a9b-4c60-a68b-046cf5288e88/ovs-vswitchd/0.log" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.649772 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.727098 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.790022 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmjt2_6ed4b9c3-3a9b-4c60-a68b-046cf5288e88/ovs-vswitchd/0.log" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791179 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" exitCode=137 Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791340 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"af076ee56d5886e64a296e55b03b5bb0ded8de489a95899c61270dac099f1dfe"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791246 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791383 4979 scope.go:117] "RemoveContainer" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.803167 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"b5f19eb16c0b9ad8d89d2db8aaef61e8a41afec6d53e30023f1498d447572ee3"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.803258 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.815675 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.819413 4979 scope.go:117] "RemoveContainer" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.828923 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.828984 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829046 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829116 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829335 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib" (OuterVolumeSpecName: "var-lib") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829351 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log" (OuterVolumeSpecName: "var-log") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829594 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run" (OuterVolumeSpecName: "var-run") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829948 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830024 4979 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830111 4979 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830179 4979 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830240 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.846181 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm" (OuterVolumeSpecName: "kube-api-access-wkgwm") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "kube-api-access-wkgwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.848006 4979 scope.go:117] "RemoveContainer" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.849328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts" (OuterVolumeSpecName: "scripts") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.893973 4979 scope.go:117] "RemoveContainer" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" Jan 30 22:07:05 crc kubenswrapper[4979]: E0130 22:07:05.894547 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb\": container with ID starting with ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb not found: ID does not exist" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.894599 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb"} err="failed to get container status \"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb\": rpc error: code = NotFound desc = could not find container \"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb\": container with ID starting with ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb not found: ID does not exist" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.894629 4979 scope.go:117] "RemoveContainer" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:07:05 crc kubenswrapper[4979]: E0130 22:07:05.894947 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70\": container with ID starting with 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 not found: ID does not exist" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.894994 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70"} err="failed to get container status \"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70\": rpc error: code = NotFound desc = could not find container \"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70\": container with ID starting with 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 not found: ID does not exist" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.895016 4979 scope.go:117] "RemoveContainer" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" Jan 30 22:07:05 crc kubenswrapper[4979]: E0130 22:07:05.895406 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd\": container with ID starting with 515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd not found: ID does not exist" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.895443 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd"} err="failed to get container status \"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd\": rpc error: code = NotFound desc = could not 
find container \"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd\": container with ID starting with 515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd not found: ID does not exist" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.895460 4979 scope.go:117] "RemoveContainer" containerID="453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.921696 4979 scope.go:117] "RemoveContainer" containerID="91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.932392 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.932428 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.947542 4979 scope.go:117] "RemoveContainer" containerID="7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.984647 4979 scope.go:117] "RemoveContainer" containerID="a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.041824 4979 scope.go:117] "RemoveContainer" containerID="c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.096273 4979 scope.go:117] "RemoveContainer" containerID="7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.128435 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.133122 4979 scope.go:117] "RemoveContainer" containerID="34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.157272 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.164359 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.171368 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.176445 4979 scope.go:117] "RemoveContainer" containerID="fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.197154 4979 scope.go:117] "RemoveContainer" containerID="77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.268293 4979 scope.go:117] "RemoveContainer" containerID="b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.354291 4979 scope.go:117] "RemoveContainer" containerID="20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.440561 4979 scope.go:117] "RemoveContainer" containerID="1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2" Jan 30 22:07:06 crc 
kubenswrapper[4979]: I0130 22:07:06.468396 4979 scope.go:117] "RemoveContainer" containerID="42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.487620 4979 scope.go:117] "RemoveContainer" containerID="1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.505365 4979 scope.go:117] "RemoveContainer" containerID="9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d" Jan 30 22:07:07 crc kubenswrapper[4979]: I0130 22:07:07.085813 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" path="/var/lib/kubelet/pods/3258ad4a-d940-41c3-b875-afadfcc317d4/volumes" Jan 30 22:07:07 crc kubenswrapper[4979]: I0130 22:07:07.101417 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" path="/var/lib/kubelet/pods/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88/volumes" Jan 30 22:07:15 crc kubenswrapper[4979]: I0130 22:07:15.074423 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:15 crc kubenswrapper[4979]: E0130 22:07:15.077269 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.146976 4979 scope.go:117] "RemoveContainer" containerID="5b349812d2a4fb80dba197720305dc0e90cd12df7c5b2836dc61787bdf46e880" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.192622 4979 scope.go:117] "RemoveContainer" containerID="d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.269839 4979 scope.go:117] "RemoveContainer" containerID="a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.317103 4979 scope.go:117] "RemoveContainer" containerID="0d4dc8128d54521f9ca5effeeca0076315899d8799e67ef62bddd57c385893e0" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.344237 4979 scope.go:117] "RemoveContainer" containerID="80e1c8de2f5d2def08241e9e838d6caa9d9317d6bfc0e4390d83af93615634c1" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.396550 4979 scope.go:117] "RemoveContainer" containerID="11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.423896 4979 scope.go:117] "RemoveContainer" containerID="ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.458091 4979 scope.go:117] "RemoveContainer" containerID="c2e6fa2e1a73e8bf62b5ee3edf154e0d34b174fdf34335916ed3037f6db0258e" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.482783 4979 scope.go:117] "RemoveContainer" containerID="20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.508214 4979 scope.go:117] "RemoveContainer" containerID="92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.539273 4979 
scope.go:117] "RemoveContainer" containerID="e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.571831 4979 scope.go:117] "RemoveContainer" containerID="32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.608246 4979 scope.go:117] "RemoveContainer" containerID="34481ae8a2678ceccfab661611d1800a7d06957c7a2f8615105c54e98d7da90e" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.629691 4979 scope.go:117] "RemoveContainer" containerID="936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.022347 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023193 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023218 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023241 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023251 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023264 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023272 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023283 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023291 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023305 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023312 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023325 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023334 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023347 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerName="kube-state-metrics" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023357 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" 
containerName="kube-state-metrics" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023375 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023392 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023400 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023423 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023433 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023442 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023455 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023463 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023476 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023484 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023498 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023506 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023517 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023525 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023535 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023542 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023557 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023565 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023577 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023586 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023596 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023604 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023613 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023620 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023637 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023646 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023657 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023665 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023676 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023684 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023694 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023701 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023710 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023719 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023734 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023742 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023752 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023760 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023771 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023779 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023793 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023801 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023816 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023824 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023835 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023844 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023853 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="mysql-bootstrap" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023860 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="mysql-bootstrap" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023871 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023879 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023890 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023898 4979 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023908 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023916 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023928 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023935 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023945 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023952 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023962 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023969 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023979 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023986 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024005 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024069 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024079 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024092 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024100 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024114 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024122 4979 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024135 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024146 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024164 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024180 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024196 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024206 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024223 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server-init" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024231 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server-init" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024242 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024263 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024271 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024307 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024315 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024330 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024339 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024353 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" Jan 30 22:07:24 crc 
kubenswrapper[4979]: I0130 22:07:24.024360 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024374 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024382 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024552 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024570 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024584 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024593 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024605 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024617 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024629 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024641 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024651 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024662 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024672 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024684 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024692 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024703 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024716 4979 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024725 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024735 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024747 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024757 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024770 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024780 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024788 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024796 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024808 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024820 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024830 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024843 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerName="kube-state-metrics" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024852 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024864 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024874 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024883 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024896 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" Jan 30 
22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024908 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024916 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024929 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024943 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024953 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024963 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024972 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024983 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025010 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025019 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025050 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025061 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025074 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025087 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025097 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025110 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025122 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.026432 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.039636 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.040329 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.040428 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.040525 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.141584 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.141699 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.141784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.142422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.142773 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.169412 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.394560 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.855688 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:25 crc kubenswrapper[4979]: I0130 22:07:25.040225 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerStarted","Data":"5f7fa040114608d9bdc5d9422bb6af3a7ca2682adece58a5958352444d4ed476"} Jan 30 22:07:26 crc kubenswrapper[4979]: I0130 22:07:26.051716 4979 generic.go:334] "Generic (PLEG): container finished" podID="801732a2-f62f-4aae-93f9-3aef631c9440" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" exitCode=0 Jan 30 22:07:26 crc kubenswrapper[4979]: I0130 22:07:26.051841 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2"} Jan 30 22:07:28 crc kubenswrapper[4979]: I0130 22:07:28.069329 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:28 crc kubenswrapper[4979]: I0130 22:07:28.069658 4979 generic.go:334] "Generic (PLEG): container finished" podID="801732a2-f62f-4aae-93f9-3aef631c9440" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" exitCode=0 Jan 30 22:07:28 crc kubenswrapper[4979]: I0130 22:07:28.069709 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48"} Jan 30 22:07:28 crc kubenswrapper[4979]: E0130 22:07:28.069885 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:29 crc kubenswrapper[4979]: I0130 22:07:29.080778 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerStarted","Data":"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4"} Jan 30 22:07:29 crc kubenswrapper[4979]: I0130 22:07:29.101174 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28h5z" podStartSLOduration=2.586897568 podStartE2EDuration="5.101153563s" podCreationTimestamp="2026-01-30 22:07:24 +0000 UTC" firstStartedPulling="2026-01-30 22:07:26.053787793 +0000 UTC m=+1642.015034826" lastFinishedPulling="2026-01-30 
22:07:28.568043788 +0000 UTC m=+1644.529290821" observedRunningTime="2026-01-30 22:07:29.097953037 +0000 UTC m=+1645.059200070" watchObservedRunningTime="2026-01-30 22:07:29.101153563 +0000 UTC m=+1645.062400616" Jan 30 22:07:34 crc kubenswrapper[4979]: I0130 22:07:34.395249 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:34 crc kubenswrapper[4979]: I0130 22:07:34.395749 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:34 crc kubenswrapper[4979]: I0130 22:07:34.438152 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:35 crc kubenswrapper[4979]: I0130 22:07:35.171733 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:35 crc kubenswrapper[4979]: I0130 22:07:35.218683 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.154383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28h5z" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" containerID="cri-o://13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" gracePeriod=2 Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.547828 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.607606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"801732a2-f62f-4aae-93f9-3aef631c9440\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.607691 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"801732a2-f62f-4aae-93f9-3aef631c9440\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.607728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"801732a2-f62f-4aae-93f9-3aef631c9440\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.608808 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities" (OuterVolumeSpecName: "utilities") pod "801732a2-f62f-4aae-93f9-3aef631c9440" (UID: "801732a2-f62f-4aae-93f9-3aef631c9440"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.614761 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg" (OuterVolumeSpecName: "kube-api-access-8vxwg") pod "801732a2-f62f-4aae-93f9-3aef631c9440" (UID: "801732a2-f62f-4aae-93f9-3aef631c9440"). InnerVolumeSpecName "kube-api-access-8vxwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.644411 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "801732a2-f62f-4aae-93f9-3aef631c9440" (UID: "801732a2-f62f-4aae-93f9-3aef631c9440"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.709909 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.709967 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.709982 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165535 4979 generic.go:334] "Generic (PLEG): container finished" podID="801732a2-f62f-4aae-93f9-3aef631c9440" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" exitCode=0 Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165603 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4"} Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"5f7fa040114608d9bdc5d9422bb6af3a7ca2682adece58a5958352444d4ed476"} Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165666 4979 scope.go:117] "RemoveContainer" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165862 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.185570 4979 scope.go:117] "RemoveContainer" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.196682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.202504 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.223266 4979 scope.go:117] "RemoveContainer" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.243748 4979 scope.go:117] "RemoveContainer" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" Jan 30 22:07:38 crc kubenswrapper[4979]: E0130 22:07:38.244288 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4\": container with ID starting with 13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4 not found: ID does not exist" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.244918 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4"} err="failed to get container status \"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4\": rpc error: code = NotFound desc = could not find container \"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4\": container with ID starting with 13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4 not found: ID does not exist" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.245020 4979 scope.go:117] "RemoveContainer" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" Jan 30 22:07:38 crc kubenswrapper[4979]: E0130 22:07:38.245739 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48\": container with ID starting with 0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48 not found: ID does not exist" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.245783 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48"} err="failed to get container status \"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48\": rpc error: code = NotFound desc = could not find container \"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48\": container with ID starting with 0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48 not found: ID does not exist" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.245810 4979 scope.go:117] "RemoveContainer" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" Jan 30 22:07:38 crc kubenswrapper[4979]: E0130 22:07:38.246503 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2\": container with ID starting with 38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2 not found: ID does not exist" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.246529 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2"} err="failed to get container status \"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2\": rpc error: code = NotFound desc = could not find container \"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2\": container with ID starting with 38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2 not found: ID does not exist" Jan 30 22:07:39 crc kubenswrapper[4979]: I0130 22:07:39.079065 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" path="/var/lib/kubelet/pods/801732a2-f62f-4aae-93f9-3aef631c9440/volumes" Jan 30 22:07:41 crc kubenswrapper[4979]: I0130 22:07:41.070189 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:41 crc kubenswrapper[4979]: E0130 22:07:41.070495 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:52 crc kubenswrapper[4979]: I0130 22:07:52.070505 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:52 crc kubenswrapper[4979]: E0130 22:07:52.071343 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:04 crc kubenswrapper[4979]: I0130 22:08:04.069731 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:04 crc kubenswrapper[4979]: E0130 22:08:04.070758 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:15 crc kubenswrapper[4979]: I0130 22:08:15.073623 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:15 crc kubenswrapper[4979]: E0130 22:08:15.074432 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.751162 4979 scope.go:117] "RemoveContainer" containerID="e944b74595e093897d5163f1d6f5e2841d79cfe7a27b236506370f93704312ba" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.797913 4979 scope.go:117] "RemoveContainer" containerID="2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.844258 4979 scope.go:117] "RemoveContainer" containerID="d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.890789 4979 scope.go:117] "RemoveContainer" containerID="79ca49dab9783f66a2ceb714d9fa0a2f61e36e1771efaec7c095de2ed5249a25" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.916318 4979 scope.go:117] "RemoveContainer" containerID="bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.939151 4979 scope.go:117] "RemoveContainer" containerID="11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.962246 4979 scope.go:117] "RemoveContainer" containerID="f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.994900 4979 scope.go:117] "RemoveContainer" containerID="60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.017969 4979 scope.go:117] "RemoveContainer" containerID="e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.046337 4979 scope.go:117] "RemoveContainer" containerID="8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.073853 4979 scope.go:117] "RemoveContainer" containerID="3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.094430 4979 scope.go:117] "RemoveContainer" containerID="046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.116541 4979 scope.go:117] "RemoveContainer" containerID="aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805" Jan 30 22:08:30 crc kubenswrapper[4979]: I0130 22:08:30.069392 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:30 crc kubenswrapper[4979]: E0130 22:08:30.070302 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:43 crc kubenswrapper[4979]: I0130 22:08:43.070458 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:43 crc kubenswrapper[4979]: 
E0130 22:08:43.071489 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:57 crc kubenswrapper[4979]: I0130 22:08:57.070571 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:57 crc kubenswrapper[4979]: E0130 22:08:57.071647 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:10 crc kubenswrapper[4979]: I0130 22:09:10.070231 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:10 crc kubenswrapper[4979]: E0130 22:09:10.071382 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.346390 4979 scope.go:117] "RemoveContainer" containerID="68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.380400 4979 scope.go:117] "RemoveContainer" containerID="edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.398581 4979 scope.go:117] "RemoveContainer" containerID="d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.412582 4979 scope.go:117] "RemoveContainer" containerID="009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.448373 4979 scope.go:117] "RemoveContainer" containerID="240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.475629 4979 scope.go:117] "RemoveContainer" containerID="b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.496844 4979 scope.go:117] "RemoveContainer" containerID="87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.521024 4979 scope.go:117] "RemoveContainer" containerID="4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.544228 4979 scope.go:117] "RemoveContainer" containerID="9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.566659 4979 scope.go:117] "RemoveContainer" 
containerID="db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.583330 4979 scope.go:117] "RemoveContainer" containerID="7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89" Jan 30 22:09:24 crc kubenswrapper[4979]: I0130 22:09:24.069844 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:24 crc kubenswrapper[4979]: E0130 22:09:24.070253 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:35 crc kubenswrapper[4979]: I0130 22:09:35.073999 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:35 crc kubenswrapper[4979]: E0130 22:09:35.074933 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:49 crc kubenswrapper[4979]: I0130 22:09:49.070337 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:49 crc kubenswrapper[4979]: E0130 22:09:49.071404 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:04 crc kubenswrapper[4979]: I0130 22:10:04.070347 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:04 crc kubenswrapper[4979]: E0130 22:10:04.072739 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:17 crc kubenswrapper[4979]: I0130 22:10:17.070012 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:17 crc kubenswrapper[4979]: E0130 22:10:17.070800 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.736703 4979 scope.go:117] "RemoveContainer" containerID="33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.762492 4979 scope.go:117] "RemoveContainer" containerID="aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.794302 4979 scope.go:117] "RemoveContainer" containerID="3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.815729 4979 scope.go:117] "RemoveContainer" containerID="4346269c3467fb9983ba22a3da499f523fe4b5d9072377bdb3c9eadf809fe8ff" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.857551 4979 scope.go:117] "RemoveContainer" containerID="78e6994e836809eb6c4147c73b39f8c34653cb31054d04a758e600e5a045351d" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.896305 4979 scope.go:117] "RemoveContainer" containerID="65f7df0a5f220ddf8b419657c4d7771409b9a8c3c511a14b07fabfbb8e20fede" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.918737 4979 scope.go:117] "RemoveContainer" containerID="d40ebbabe3d8f2995f627a1ae83a4f0a8052321d11e2329aba49ee99c9ce1294" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.949242 4979 scope.go:117] "RemoveContainer" containerID="ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.004764 4979 scope.go:117] "RemoveContainer" containerID="10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.031331 4979 scope.go:117] "RemoveContainer" containerID="2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.052801 4979 scope.go:117] "RemoveContainer" containerID="d6d25ae31ed5e6d9c7cb7e6adcce8605ff98681415f720f118a7c85b8f2468e0" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.074962 4979 scope.go:117] "RemoveContainer" containerID="ce15c22300306383eb564954b64ad58a13fe8c8c246e3d682e1063ba2ed2a496" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.100329 4979 scope.go:117] "RemoveContainer" containerID="70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7" Jan 30 22:10:29 crc kubenswrapper[4979]: I0130 22:10:29.070268 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:29 crc kubenswrapper[4979]: E0130 22:10:29.072007 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:44 crc kubenswrapper[4979]: I0130 22:10:44.069453 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:44 crc kubenswrapper[4979]: E0130 22:10:44.070240 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:55 crc kubenswrapper[4979]: I0130 22:10:55.074297 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:55 crc kubenswrapper[4979]: E0130 22:10:55.075237 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:11:08 crc kubenswrapper[4979]: I0130 22:11:08.069386 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:11:08 crc kubenswrapper[4979]: I0130 22:11:08.658680 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"} Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.280615 4979 scope.go:117] "RemoveContainer" containerID="03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e" Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.346146 4979 scope.go:117] "RemoveContainer" containerID="10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755" Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.374072 4979 scope.go:117] "RemoveContainer" containerID="4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a" Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.392970 4979 scope.go:117] "RemoveContainer" containerID="5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.804103 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:07 crc kubenswrapper[4979]: E0130 22:12:07.805310 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-utilities" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.805332 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-utilities" Jan 30 22:12:07 crc kubenswrapper[4979]: E0130 22:12:07.805374 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-content" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.805384 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-content" Jan 30 22:12:07 crc kubenswrapper[4979]: E0130 22:12:07.805409 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.805419 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" Jan 30 22:12:07 crc 
kubenswrapper[4979]: I0130 22:12:07.805638 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.806876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.811457 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.878253 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.878316 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.878529 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.980619 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.980697 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.980758 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.981399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.981480 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:08 crc kubenswrapper[4979]: I0130 22:12:08.004276 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:08 crc kubenswrapper[4979]: I0130 22:12:08.138281 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:08 crc kubenswrapper[4979]: I0130 22:12:08.792219 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.196843 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.203101 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.216793 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.220349 4979 generic.go:334] "Generic (PLEG): container finished" podID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" exitCode=0 Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.220411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4"} Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.220451 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerStarted","Data":"532bfd5cbbcfc092ff84e0e7922718c9662053f5efe4ea007105b76084f9b245"} Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.224025 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.304257 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.304768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.304801 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.406427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.406501 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.406528 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.407073 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.407324 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.430505 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.530954 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.049922 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.228353 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3"} Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.228404 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"a88de8b1be80ae7f14d10157dabb9bd7056d8724d1af969bce465123bf79c71b"} Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.234054 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerStarted","Data":"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69"} Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.246728 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5d02742-f26c-416a-a917-03ca6eb81632" containerID="96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3" exitCode=0 Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.246858 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3"} Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.250818 4979 generic.go:334] "Generic (PLEG): container finished" podID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" exitCode=0 Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.250880 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69"} Jan 30 22:12:12 crc kubenswrapper[4979]: I0130 22:12:12.262124 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerStarted","Data":"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f"} Jan 30 22:12:12 crc kubenswrapper[4979]: I0130 22:12:12.264441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9"} Jan 30 22:12:12 crc kubenswrapper[4979]: I0130 22:12:12.292673 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fgn85" podStartSLOduration=2.7385316939999997 podStartE2EDuration="5.292643604s" podCreationTimestamp="2026-01-30 22:12:07 +0000 UTC" firstStartedPulling="2026-01-30 22:12:09.223677242 +0000 UTC m=+1925.184924275" lastFinishedPulling="2026-01-30 22:12:11.777789152 +0000 UTC m=+1927.739036185" observedRunningTime="2026-01-30 22:12:12.284764812 
+0000 UTC m=+1928.246011875" watchObservedRunningTime="2026-01-30 22:12:12.292643604 +0000 UTC m=+1928.253890647" Jan 30 22:12:13 crc kubenswrapper[4979]: I0130 22:12:13.274810 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5d02742-f26c-416a-a917-03ca6eb81632" containerID="60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9" exitCode=0 Jan 30 22:12:13 crc kubenswrapper[4979]: I0130 22:12:13.274909 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9"} Jan 30 22:12:14 crc kubenswrapper[4979]: I0130 22:12:14.286581 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9"} Jan 30 22:12:14 crc kubenswrapper[4979]: I0130 22:12:14.308900 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-trp49" podStartSLOduration=2.8902174929999997 podStartE2EDuration="5.308871309s" podCreationTimestamp="2026-01-30 22:12:09 +0000 UTC" firstStartedPulling="2026-01-30 22:12:11.249819906 +0000 UTC m=+1927.211066969" lastFinishedPulling="2026-01-30 22:12:13.668473752 +0000 UTC m=+1929.629720785" observedRunningTime="2026-01-30 22:12:14.303840584 +0000 UTC m=+1930.265087617" watchObservedRunningTime="2026-01-30 22:12:14.308871309 +0000 UTC m=+1930.270118352" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.138673 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.139153 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.191840 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.360474 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.981287 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:19 crc kubenswrapper[4979]: I0130 22:12:19.531460 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:19 crc kubenswrapper[4979]: I0130 22:12:19.532240 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:19 crc kubenswrapper[4979]: I0130 22:12:19.580639 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.340281 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fgn85" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" containerID="cri-o://f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" gracePeriod=2 Jan 30 22:12:20 crc 
kubenswrapper[4979]: I0130 22:12:20.401788 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.733633 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.796190 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.796312 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.796356 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.798021 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities" (OuterVolumeSpecName: "utilities") pod "c48a1a8d-b1ab-431a-87c6-0cba912c20e7" (UID: "c48a1a8d-b1ab-431a-87c6-0cba912c20e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.831437 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7" (OuterVolumeSpecName: "kube-api-access-vslm7") pod "c48a1a8d-b1ab-431a-87c6-0cba912c20e7" (UID: "c48a1a8d-b1ab-431a-87c6-0cba912c20e7"). InnerVolumeSpecName "kube-api-access-vslm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.897668 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.897795 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.147646 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c48a1a8d-b1ab-431a-87c6-0cba912c20e7" (UID: "c48a1a8d-b1ab-431a-87c6-0cba912c20e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.204936 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.347857 4979 generic.go:334] "Generic (PLEG): container finished" podID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" exitCode=0 Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.347927 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.347975 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f"} Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.348101 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"532bfd5cbbcfc092ff84e0e7922718c9662053f5efe4ea007105b76084f9b245"} Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.348126 4979 scope.go:117] "RemoveContainer" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.368114 4979 scope.go:117] "RemoveContainer" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.398344 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.401443 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.403392 4979 scope.go:117] "RemoveContainer" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.428448 4979 scope.go:117] "RemoveContainer" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" Jan 30 22:12:21 crc kubenswrapper[4979]: E0130 22:12:21.429383 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f\": container with ID starting with f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f not found: ID does not exist" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.429454 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f"} err="failed to get container status \"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f\": rpc error: code = NotFound desc = could not find container \"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f\": container with ID starting with f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f not found: ID does not exist" Jan 30 
22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.429491 4979 scope.go:117] "RemoveContainer" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" Jan 30 22:12:21 crc kubenswrapper[4979]: E0130 22:12:21.430059 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69\": container with ID starting with 2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69 not found: ID does not exist" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.430119 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69"} err="failed to get container status \"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69\": rpc error: code = NotFound desc = could not find container \"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69\": container with ID starting with 2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69 not found: ID does not exist" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.430153 4979 scope.go:117] "RemoveContainer" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" Jan 30 22:12:21 crc kubenswrapper[4979]: E0130 22:12:21.430630 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4\": container with ID starting with 0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4 not found: ID does not exist" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.430686 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4"} err="failed to get container status \"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4\": rpc error: code = NotFound desc = could not find container \"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4\": container with ID starting with 0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4 not found: ID does not exist" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.983323 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:22 crc kubenswrapper[4979]: I0130 22:12:22.358118 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-trp49" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" containerID="cri-o://197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9" gracePeriod=2 Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.078879 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" path="/var/lib/kubelet/pods/c48a1a8d-b1ab-431a-87c6-0cba912c20e7/volumes" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.370254 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5d02742-f26c-416a-a917-03ca6eb81632" containerID="197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9" exitCode=0 Jan 30 22:12:23 crc 
kubenswrapper[4979]: I0130 22:12:23.370339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9"} Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.439662 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.641401 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"b5d02742-f26c-416a-a917-03ca6eb81632\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642056 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"b5d02742-f26c-416a-a917-03ca6eb81632\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642187 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"b5d02742-f26c-416a-a917-03ca6eb81632\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities" (OuterVolumeSpecName: "utilities") pod "b5d02742-f26c-416a-a917-03ca6eb81632" (UID: "b5d02742-f26c-416a-a917-03ca6eb81632"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642455 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.649433 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7" (OuterVolumeSpecName: "kube-api-access-g7qv7") pod "b5d02742-f26c-416a-a917-03ca6eb81632" (UID: "b5d02742-f26c-416a-a917-03ca6eb81632"). InnerVolumeSpecName "kube-api-access-g7qv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.697892 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5d02742-f26c-416a-a917-03ca6eb81632" (UID: "b5d02742-f26c-416a-a917-03ca6eb81632"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.744868 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.744919 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.383391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"a88de8b1be80ae7f14d10157dabb9bd7056d8724d1af969bce465123bf79c71b"} Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.383482 4979 scope.go:117] "RemoveContainer" containerID="197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.383520 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.425760 4979 scope.go:117] "RemoveContainer" containerID="60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.434019 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.443265 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.451690 4979 scope.go:117] "RemoveContainer" containerID="96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.514670 4979 scope.go:117] "RemoveContainer" containerID="2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.537632 4979 scope.go:117] "RemoveContainer" containerID="748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.555628 4979 scope.go:117] "RemoveContainer" containerID="273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.597421 4979 scope.go:117] "RemoveContainer" containerID="99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.617675 4979 scope.go:117] "RemoveContainer" containerID="5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504" Jan 30 22:12:25 crc kubenswrapper[4979]: I0130 22:12:25.078921 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" path="/var/lib/kubelet/pods/b5d02742-f26c-416a-a917-03ca6eb81632/volumes" Jan 30 22:13:32 crc kubenswrapper[4979]: I0130 22:13:32.039704 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:13:32 crc kubenswrapper[4979]: 
I0130 22:13:32.040359 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:14:02 crc kubenswrapper[4979]: I0130 22:14:02.039929 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:14:02 crc kubenswrapper[4979]: I0130 22:14:02.041156 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.040313 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041027 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041093 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041700 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041750 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87" gracePeriod=600 Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421272 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87" exitCode=0 Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421398 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"} Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421817 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"} Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421857 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.149843 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150891 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150908 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150927 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150936 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150959 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150967 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150980 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150988 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.151002 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151010 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.151023 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151046 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151206 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151225 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151873 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.154218 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.159752 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.162851 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.258316 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.258405 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.258481 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.359335 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.359409 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.359450 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.360385 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod 
\"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.370044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.380115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.488000 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.922107 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 22:15:01 crc kubenswrapper[4979]: I0130 22:15:01.652810 4979 generic.go:334] "Generic (PLEG): container finished" podID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" containerID="3bbe88baa1620c36ba12ba04d5a8542170b476b0b0988530b1848eeba6a89780" exitCode=0 Jan 30 22:15:01 crc kubenswrapper[4979]: I0130 22:15:01.652903 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" event={"ID":"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8","Type":"ContainerDied","Data":"3bbe88baa1620c36ba12ba04d5a8542170b476b0b0988530b1848eeba6a89780"} Jan 30 22:15:01 crc kubenswrapper[4979]: I0130 22:15:01.654123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" event={"ID":"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8","Type":"ContainerStarted","Data":"d2a08b9f9fb63d024bde97d062854de25051a1d3114221d7de424c9d5a44ccb3"} Jan 30 22:15:02 crc kubenswrapper[4979]: I0130 22:15:02.939015 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.004810 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.004928 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.005154 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.005814 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" (UID: "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.013405 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8" (OuterVolumeSpecName: "kube-api-access-b4pt8") pod "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" (UID: "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8"). InnerVolumeSpecName "kube-api-access-b4pt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.014396 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" (UID: "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.106265 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.106308 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.106320 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.670705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" event={"ID":"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8","Type":"ContainerDied","Data":"d2a08b9f9fb63d024bde97d062854de25051a1d3114221d7de424c9d5a44ccb3"} Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.671237 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a08b9f9fb63d024bde97d062854de25051a1d3114221d7de424c9d5a44ccb3" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.670845 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:04 crc kubenswrapper[4979]: I0130 22:15:04.045680 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 22:15:04 crc kubenswrapper[4979]: I0130 22:15:04.054280 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 22:15:05 crc kubenswrapper[4979]: I0130 22:15:05.089193 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" path="/var/lib/kubelet/pods/b43f94f0-791b-49cc-afe0-95ec18aa1f07/volumes" Jan 30 22:15:24 crc kubenswrapper[4979]: I0130 22:15:24.744820 4979 scope.go:117] "RemoveContainer" containerID="72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727" Jan 30 22:16:32 crc kubenswrapper[4979]: I0130 22:16:32.040067 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:16:32 crc kubenswrapper[4979]: I0130 22:16:32.040914 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.283155 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:16:55 crc kubenswrapper[4979]: E0130 22:16:55.284163 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" 
containerName="collect-profiles" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.284177 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" containerName="collect-profiles" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.284374 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" containerName="collect-profiles" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.285663 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.294150 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.403838 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.403919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.404103 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505273 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505305 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505907 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " 
pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505982 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.526062 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.619092 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.104060 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.902747 4979 generic.go:334] "Generic (PLEG): container finished" podID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c" exitCode=0 Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.902841 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"} Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.903161 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerStarted","Data":"b41ac97e91ce5fa78d2960a00ec85ea0838948d02e01ae651778acba106c8d44"} Jan 30 22:16:57 crc kubenswrapper[4979]: I0130 22:16:57.918315 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerStarted","Data":"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"} Jan 30 22:16:58 crc kubenswrapper[4979]: I0130 22:16:58.966537 4979 generic.go:334] "Generic (PLEG): container finished" podID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e" exitCode=0 Jan 30 22:16:58 crc kubenswrapper[4979]: I0130 22:16:58.966836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"} Jan 30 22:16:59 crc kubenswrapper[4979]: I0130 22:16:59.977977 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerStarted","Data":"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"} Jan 30 22:16:59 crc kubenswrapper[4979]: I0130 22:16:59.998189 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r64t2" podStartSLOduration=2.502903448 podStartE2EDuration="4.998169829s" 
Jan 30 22:17:02 crc kubenswrapper[4979]: I0130 22:17:02.039898 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:17:02 crc kubenswrapper[4979]: I0130 22:17:02.040085 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:17:05 crc kubenswrapper[4979]: I0130 22:17:05.620223 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:05 crc kubenswrapper[4979]: I0130 22:17:05.620509 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:05 crc kubenswrapper[4979]: I0130 22:17:05.662899 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:06 crc kubenswrapper[4979]: I0130 22:17:06.076754 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:06 crc kubenswrapper[4979]: I0130 22:17:06.123512 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"]
Jan 30 22:17:08 crc kubenswrapper[4979]: I0130 22:17:08.045922 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r64t2" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server" containerID="cri-o://b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23" gracePeriod=2
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.522723 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.722308 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"5488cdca-2b6c-4fa2-bd28-103b7babd258\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") "
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.722479 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"5488cdca-2b6c-4fa2-bd28-103b7babd258\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") "
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.722556 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"5488cdca-2b6c-4fa2-bd28-103b7babd258\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") "
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.723536 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities" (OuterVolumeSpecName: "utilities") pod "5488cdca-2b6c-4fa2-bd28-103b7babd258" (UID: "5488cdca-2b6c-4fa2-bd28-103b7babd258"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.729121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5" (OuterVolumeSpecName: "kube-api-access-4gfc5") pod "5488cdca-2b6c-4fa2-bd28-103b7babd258" (UID: "5488cdca-2b6c-4fa2-bd28-103b7babd258"). InnerVolumeSpecName "kube-api-access-4gfc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.823864 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") on node \"crc\" DevicePath \"\""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.823901 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.852226 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5488cdca-2b6c-4fa2-bd28-103b7babd258" (UID: "5488cdca-2b6c-4fa2-bd28-103b7babd258"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.925267 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061565 4979 generic.go:334] "Generic (PLEG): container finished" podID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23" exitCode=0
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"}
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061661 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"b41ac97e91ce5fa78d2960a00ec85ea0838948d02e01ae651778acba106c8d44"}
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061657 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061730 4979 scope.go:117] "RemoveContainer" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.082126 4979 scope.go:117] "RemoveContainer" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.107523 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"]
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.114799 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"]
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.117963 4979 scope.go:117] "RemoveContainer" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.140154 4979 scope.go:117] "RemoveContainer" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"
Jan 30 22:17:10 crc kubenswrapper[4979]: E0130 22:17:10.141295 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23\": container with ID starting with b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23 not found: ID does not exist" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.141341 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"} err="failed to get container status \"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23\": rpc error: code = NotFound desc = could not find container \"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23\": container with ID starting with b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23 not found: ID does not exist"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.141376 4979 scope.go:117] "RemoveContainer" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"
Jan 30 22:17:10 crc kubenswrapper[4979]: E0130 22:17:10.141981 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e\": container with ID starting with ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e not found: ID does not exist" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.142111 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"} err="failed to get container status \"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e\": rpc error: code = NotFound desc = could not find container \"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e\": container with ID starting with ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e not found: ID does not exist"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.142189 4979 scope.go:117] "RemoveContainer" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"
Jan 30 22:17:10 crc kubenswrapper[4979]: E0130 22:17:10.142696 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c\": container with ID starting with eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c not found: ID does not exist" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.142729 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"} err="failed to get container status \"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c\": rpc error: code = NotFound desc = could not find container \"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c\": container with ID starting with eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c not found: ID does not exist"
Jan 30 22:17:11 crc kubenswrapper[4979]: I0130 22:17:11.078722 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" path="/var/lib/kubelet/pods/5488cdca-2b6c-4fa2-bd28-103b7babd258/volumes"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.039875 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.040664 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
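
The RemoveContainer sequence at 22:17:10 above is an ordinary teardown race: the containers were already gone by the time the deletor asked the runtime for their status, so CRI-O answers NotFound and the kubelet logs the miss at info level and carries on. Treating NotFound as success is the usual way to make such cleanup idempotent; a minimal sketch with plain gRPC status codes (removeIdempotent and the stand-in runtime func are illustrative, not the kubelet's actual types):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIdempotent wraps a CRI-style remove call. A NotFound from the
// runtime means the container is already gone, which is the desired end
// state, so it is not reported as a failure.
func removeIdempotent(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil
	}
	return fmt.Errorf("remove container %s: %w", id, err)
}

func main() {
	// Simulate the runtime answering like the records above:
	// rpc error: code = NotFound desc = could not find container ...
	runtime := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeIdempotent(runtime, "b176349754f9")) // <nil>
}
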
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.041499 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.041555 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" gracePeriod=600 Jan 30 22:17:32 crc kubenswrapper[4979]: E0130 22:17:32.160281 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.216346 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" exitCode=0 Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.216477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"} Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.216565 4979 scope.go:117] "RemoveContainer" containerID="5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87" Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.217227 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:17:32 crc kubenswrapper[4979]: E0130 22:17:32.217580 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:17:45 crc kubenswrapper[4979]: I0130 22:17:45.074397 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:17:45 crc kubenswrapper[4979]: E0130 22:17:45.075292 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:18:00 
Jan 30 22:18:00 crc kubenswrapper[4979]: I0130 22:18:00.070728 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:00 crc kubenswrapper[4979]: E0130 22:18:00.071905 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:11 crc kubenswrapper[4979]: I0130 22:18:11.070666 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:11 crc kubenswrapper[4979]: E0130 22:18:11.072324 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:23 crc kubenswrapper[4979]: I0130 22:18:23.069959 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:23 crc kubenswrapper[4979]: E0130 22:18:23.071752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:38 crc kubenswrapper[4979]: I0130 22:18:38.071309 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:38 crc kubenswrapper[4979]: E0130 22:18:38.073644 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.069537 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.070318 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.474775 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"]
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.475195 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475215 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server"
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.475228 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-utilities"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475236 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-utilities"
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.475257 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-content"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475264 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-content"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475453 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.476654 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.493303 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"]
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.602894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.603319 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.603525 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705239 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705375 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705886 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.706460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.729144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.802126 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.271203 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"]
Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.978150 4979 generic.go:334] "Generic (PLEG): container finished" podID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" exitCode=0
Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.978202 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77"}
Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.978232 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerStarted","Data":"b7e2aa97e43eb6946831cc7a88d9f10f88fab647f7641efd18f21e2c664f44d8"}
Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.981419 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 22:18:54 crc kubenswrapper[4979]: I0130 22:18:54.754135 4979 generic.go:334] "Generic (PLEG): container finished" podID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" exitCode=0
Jan 30 22:18:54 crc kubenswrapper[4979]: I0130 22:18:54.754228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64"}
Jan 30 22:18:55 crc kubenswrapper[4979]: I0130 22:18:55.767331 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerStarted","Data":"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186"}
Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.802999 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.803428 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.850427 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.869849 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8258p" podStartSLOduration=7.649781619 podStartE2EDuration="10.86982959s" podCreationTimestamp="2026-01-30 22:18:50 +0000 UTC" firstStartedPulling="2026-01-30 22:18:51.980860661 +0000 UTC m=+2327.942107714" lastFinishedPulling="2026-01-30 22:18:55.200908652 +0000 UTC m=+2331.162155685" observedRunningTime="2026-01-30 22:18:55.793755491 +0000 UTC m=+2331.755002524" watchObservedRunningTime="2026-01-30 22:19:00.86982959 +0000 UTC m=+2336.831076623"
pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:01 crc kubenswrapper[4979]: I0130 22:19:01.895611 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:19:03 crc kubenswrapper[4979]: I0130 22:19:03.819764 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8258p" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server" containerID="cri-o://83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" gracePeriod=2 Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.290326 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.421414 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"0cf5e122-2db4-4c3f-b6db-250788b13137\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.421590 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"0cf5e122-2db4-4c3f-b6db-250788b13137\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.421629 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"0cf5e122-2db4-4c3f-b6db-250788b13137\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.422511 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities" (OuterVolumeSpecName: "utilities") pod "0cf5e122-2db4-4c3f-b6db-250788b13137" (UID: "0cf5e122-2db4-4c3f-b6db-250788b13137"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.427166 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84" (OuterVolumeSpecName: "kube-api-access-5sc84") pod "0cf5e122-2db4-4c3f-b6db-250788b13137" (UID: "0cf5e122-2db4-4c3f-b6db-250788b13137"). InnerVolumeSpecName "kube-api-access-5sc84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.449677 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf5e122-2db4-4c3f-b6db-250788b13137" (UID: "0cf5e122-2db4-4c3f-b6db-250788b13137"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.523173 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.523244 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.523259 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") on node \"crc\" DevicePath \"\"" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828116 4979 generic.go:334] "Generic (PLEG): container finished" podID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" exitCode=0 Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828164 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186"} Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"b7e2aa97e43eb6946831cc7a88d9f10f88fab647f7641efd18f21e2c664f44d8"} Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828255 4979 scope.go:117] "RemoveContainer" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.848395 4979 scope.go:117] "RemoveContainer" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.867620 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.872870 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.888685 4979 scope.go:117] "RemoveContainer" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.903896 4979 scope.go:117] "RemoveContainer" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" Jan 30 22:19:04 crc kubenswrapper[4979]: E0130 22:19:04.904281 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186\": container with ID starting with 83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186 not found: ID does not exist" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904326 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186"} err="failed to get container status \"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186\": rpc error: code = NotFound desc = could not find container \"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186\": container with ID starting with 83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186 not found: ID does not exist" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904354 4979 scope.go:117] "RemoveContainer" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" Jan 30 22:19:04 crc kubenswrapper[4979]: E0130 22:19:04.904591 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64\": container with ID starting with c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64 not found: ID does not exist" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904618 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64"} err="failed to get container status \"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64\": rpc error: code = NotFound desc = could not find container \"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64\": container with ID starting with c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64 not found: ID does not exist" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904634 4979 scope.go:117] "RemoveContainer" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" Jan 30 22:19:04 crc kubenswrapper[4979]: E0130 22:19:04.904902 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77\": container with ID starting with 3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77 not found: ID does not exist" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904926 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77"} err="failed to get container status \"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77\": rpc error: code = NotFound desc = could not find container \"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77\": container with ID starting with 3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77 not found: ID does not exist" Jan 30 22:19:05 crc kubenswrapper[4979]: I0130 22:19:05.073727 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:19:05 crc kubenswrapper[4979]: E0130 22:19:05.074609 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
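
Records like the ones above are uniform enough to mine mechanically. A small sketch that pulls the pod, event type, and container/sandbox ID out of the "SyncLoop (PLEG): event for pod" records (the regex is fitted to the lines in this log and nothing more):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like:
//   ... kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="ns/name"
//   event={"ID":"...","Type":"ContainerDied","Data":"..."}
var plegRe = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // records can be long
	for sc.Scan() {
		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
			// pod, event type, container or sandbox ID
			fmt.Printf("%-50s %-17s %s\n", m[1], m[3], m[4])
		}
	}
}

Fed this log on stdin (go run pleg.go < kubelet.log) it prints one line per ContainerStarted/ContainerDied event, which makes the lifecycles of the marketplace pods in this section easy to follow.
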
Jan 30 22:19:05 crc kubenswrapper[4979]: I0130 22:19:05.073727 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:05 crc kubenswrapper[4979]: E0130 22:19:05.074609 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:05 crc kubenswrapper[4979]: I0130 22:19:05.078702 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" path="/var/lib/kubelet/pods/0cf5e122-2db4-4c3f-b6db-250788b13137/volumes"
Jan 30 22:19:18 crc kubenswrapper[4979]: I0130 22:19:18.070531 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:18 crc kubenswrapper[4979]: E0130 22:19:18.071482 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:31 crc kubenswrapper[4979]: I0130 22:19:31.070238 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:31 crc kubenswrapper[4979]: E0130 22:19:31.071313 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:42 crc kubenswrapper[4979]: I0130 22:19:42.069789 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:42 crc kubenswrapper[4979]: E0130 22:19:42.070478 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:55 crc kubenswrapper[4979]: I0130 22:19:55.075263 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:55 crc kubenswrapper[4979]: E0130 22:19:55.076339 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:09 crc kubenswrapper[4979]: I0130 22:20:09.069777 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:09 crc kubenswrapper[4979]: E0130 22:20:09.070592 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:21 crc kubenswrapper[4979]: I0130 22:20:21.069955 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:21 crc kubenswrapper[4979]: E0130 22:20:21.071013 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:32 crc kubenswrapper[4979]: I0130 22:20:32.070277 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:32 crc kubenswrapper[4979]: E0130 22:20:32.071045 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:45 crc kubenswrapper[4979]: I0130 22:20:45.073636 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:45 crc kubenswrapper[4979]: E0130 22:20:45.076305 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:56 crc kubenswrapper[4979]: I0130 22:20:56.069619 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:56 crc kubenswrapper[4979]: E0130 22:20:56.070489 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:10 crc kubenswrapper[4979]: I0130 22:21:10.069844 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:10 crc kubenswrapper[4979]: E0130 22:21:10.070683 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:23 crc kubenswrapper[4979]: I0130 22:21:23.070642 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:23 crc kubenswrapper[4979]: E0130 22:21:23.072093 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:35 crc kubenswrapper[4979]: I0130 22:21:35.074607 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:35 crc kubenswrapper[4979]: E0130 22:21:35.076357 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:47 crc kubenswrapper[4979]: I0130 22:21:47.070392 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:47 crc kubenswrapper[4979]: E0130 22:21:47.071047 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:22:01 crc kubenswrapper[4979]: I0130 22:22:01.070301 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:22:01 crc kubenswrapper[4979]: E0130 22:22:01.070982 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.135875 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"]
Jan 30 22:22:12 crc kubenswrapper[4979]: E0130 22:22:12.136849 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-utilities"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.136868 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-utilities"
Jan 30 22:22:12 crc kubenswrapper[4979]: E0130 22:22:12.136882 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-content"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.136889 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-content"
Jan 30 22:22:12 crc kubenswrapper[4979]: E0130 22:22:12.136921 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.136930 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.137118 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.138420 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.158063 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"]
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.288906 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.288983 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.289155 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.390866 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.390953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.390987 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.391552 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.391566 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.420168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.462229 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.990087 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"]
Jan 30 22:22:13 crc kubenswrapper[4979]: I0130 22:22:13.267970 4979 generic.go:334] "Generic (PLEG): container finished" podID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" exitCode=0
Jan 30 22:22:13 crc kubenswrapper[4979]: I0130 22:22:13.268044 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08"}
Jan 30 22:22:13 crc kubenswrapper[4979]: I0130 22:22:13.268073 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerStarted","Data":"d781cdde8125c9d069fa1fd3beaffd1896bbc518ab3100d82c8a55ce4a8432fd"}
Jan 30 22:22:15 crc kubenswrapper[4979]: I0130 22:22:15.078110 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:22:15 crc kubenswrapper[4979]: E0130 22:22:15.079393 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:22:15 crc kubenswrapper[4979]: I0130 22:22:15.288006 4979 generic.go:334] "Generic (PLEG): container finished" podID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" exitCode=0
Jan 30 22:22:15 crc kubenswrapper[4979]: I0130 22:22:15.288107 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8"}
Jan 30 22:22:17 crc kubenswrapper[4979]: I0130 22:22:17.309309 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerStarted","Data":"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df"}
Jan 30 22:22:18 crc kubenswrapper[4979]: I0130 22:22:18.352965 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b9qnz" podStartSLOduration=2.689777086 podStartE2EDuration="6.352934023s" podCreationTimestamp="2026-01-30 22:22:12 +0000 UTC" firstStartedPulling="2026-01-30 22:22:13.270308996 +0000 UTC m=+2529.231556039" lastFinishedPulling="2026-01-30 22:22:16.933465943 +0000 UTC m=+2532.894712976" observedRunningTime="2026-01-30 22:22:18.349365517 +0000 UTC m=+2534.310612560" watchObservedRunningTime="2026-01-30 22:22:18.352934023 +0000 UTC m=+2534.314181066"
Jan 30 22:22:22 crc kubenswrapper[4979]: I0130 22:22:22.463061 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:22 crc kubenswrapper[4979]: I0130 22:22:22.463310 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:22 crc kubenswrapper[4979]: I0130 22:22:22.514300 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:23 crc kubenswrapper[4979]: I0130 22:22:23.395524 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b9qnz"
Jan 30 22:22:23 crc kubenswrapper[4979]: I0130 22:22:23.444400 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"]
Jan 30 22:22:25 crc kubenswrapper[4979]: I0130 22:22:25.372125 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b9qnz" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server" containerID="cri-o://2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.067680 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"04c52e7b-84ba-42ed-8feb-0b762719d029\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.067770 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"04c52e7b-84ba-42ed-8feb-0b762719d029\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.067907 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"04c52e7b-84ba-42ed-8feb-0b762719d029\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.074321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc" (OuterVolumeSpecName: "kube-api-access-bfthc") pod "04c52e7b-84ba-42ed-8feb-0b762719d029" (UID: "04c52e7b-84ba-42ed-8feb-0b762719d029"). InnerVolumeSpecName "kube-api-access-bfthc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.077431 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities" (OuterVolumeSpecName: "utilities") pod "04c52e7b-84ba-42ed-8feb-0b762719d029" (UID: "04c52e7b-84ba-42ed-8feb-0b762719d029"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.136172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04c52e7b-84ba-42ed-8feb-0b762719d029" (UID: "04c52e7b-84ba-42ed-8feb-0b762719d029"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.170021 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.170063 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.170072 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") on node \"crc\" DevicePath \"\"" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387254 4979 generic.go:334] "Generic (PLEG): container finished" podID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" exitCode=0 Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387310 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df"} Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387345 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"d781cdde8125c9d069fa1fd3beaffd1896bbc518ab3100d82c8a55ce4a8432fd"} Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387365 4979 scope.go:117] "RemoveContainer" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387521 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.429586 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.434292 4979 scope.go:117] "RemoveContainer" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.437226 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.457857 4979 scope.go:117] "RemoveContainer" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.473968 4979 scope.go:117] "RemoveContainer" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" Jan 30 22:22:27 crc kubenswrapper[4979]: E0130 22:22:27.474570 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df\": container with ID starting with 2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df not found: ID does not exist" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.474605 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df"} err="failed to get container status \"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df\": rpc error: code = NotFound desc = could not find container \"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df\": container with ID starting with 2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df not found: ID does not exist" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.474640 4979 scope.go:117] "RemoveContainer" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" Jan 30 22:22:27 crc kubenswrapper[4979]: E0130 22:22:27.474999 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8\": container with ID starting with bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8 not found: ID does not exist" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.475063 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8"} err="failed to get container status \"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8\": rpc error: code = NotFound desc = could not find container \"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8\": container with ID starting with bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8 not found: ID does not exist" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.475092 4979 scope.go:117] "RemoveContainer" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" Jan 30 22:22:27 crc kubenswrapper[4979]: E0130 22:22:27.475387 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08\": container with ID starting with fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08 not found: ID does not exist" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.475413 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08"} err="failed to get container status \"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08\": rpc error: code = NotFound desc = could not find container \"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08\": container with ID starting with fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08 not found: ID does not exist" Jan 30 22:22:29 crc kubenswrapper[4979]: I0130 22:22:29.069804 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:22:29 crc kubenswrapper[4979]: E0130 22:22:29.070198 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:22:29 crc kubenswrapper[4979]: I0130 22:22:29.078639 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" path="/var/lib/kubelet/pods/04c52e7b-84ba-42ed-8feb-0b762719d029/volumes" Jan 30 22:22:40 crc kubenswrapper[4979]: I0130 22:22:40.069893 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:22:41 crc kubenswrapper[4979]: I0130 22:22:41.491090 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee"} Jan 30 22:23:03 crc kubenswrapper[4979]: I0130 22:23:03.998139 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:04 crc kubenswrapper[4979]: E0130 22:23:03.999144 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-utilities" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999164 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-utilities" Jan 30 22:23:04 crc kubenswrapper[4979]: E0130 22:23:03.999179 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-content" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999189 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-content" Jan 30 22:23:04 crc kubenswrapper[4979]: E0130 22:23:03.999206 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server" 
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999214 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999392 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.000571 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.020586 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"]
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.079118 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.079207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.079248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181349 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181801 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.212254 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.364880 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.902910 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"]
Jan 30 22:23:05 crc kubenswrapper[4979]: I0130 22:23:05.683220 4979 generic.go:334] "Generic (PLEG): container finished" podID="8592764d-c12c-4340-8bc1-a8ac67545450" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676" exitCode=0
Jan 30 22:23:05 crc kubenswrapper[4979]: I0130 22:23:05.683426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"}
Jan 30 22:23:05 crc kubenswrapper[4979]: I0130 22:23:05.683725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerStarted","Data":"e10688bf7972c96d2aaf0e41ff4a1887b252366702cb92081eb97dde69ea3dfc"}
Jan 30 22:23:06 crc kubenswrapper[4979]: I0130 22:23:06.695472 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerStarted","Data":"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"}
Jan 30 22:23:07 crc kubenswrapper[4979]: I0130 22:23:07.709008 4979 generic.go:334] "Generic (PLEG): container finished" podID="8592764d-c12c-4340-8bc1-a8ac67545450" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044" exitCode=0
Jan 30 22:23:07 crc kubenswrapper[4979]: I0130 22:23:07.709094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"}
Jan 30 22:23:08 crc kubenswrapper[4979]: I0130 22:23:08.721558 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerStarted","Data":"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"}
Jan 30 22:23:08 crc kubenswrapper[4979]: I0130 22:23:08.759134 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwrbq" podStartSLOduration=3.313729743 podStartE2EDuration="5.759091746s" podCreationTimestamp="2026-01-30 22:23:03 +0000 UTC" firstStartedPulling="2026-01-30 22:23:05.685189586 +0000 UTC m=+2581.646436619" lastFinishedPulling="2026-01-30 22:23:08.130551579 +0000 UTC m=+2584.091798622" observedRunningTime="2026-01-30 22:23:08.738385655 +0000 UTC m=+2584.699632728" watchObservedRunningTime="2026-01-30 22:23:08.759091746 +0000 UTC m=+2584.720338799"
Jan 30 22:23:13 crc kubenswrapper[4979]: E0130 22:23:13.801060 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.365272 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.365329 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.403110 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.839408 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.892442 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"]
Jan 30 22:23:16 crc kubenswrapper[4979]: I0130 22:23:16.803537 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwrbq" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" containerID="cri-o://811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" gracePeriod=2
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.271108 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.407755 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"8592764d-c12c-4340-8bc1-a8ac67545450\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") "
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.408081 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"8592764d-c12c-4340-8bc1-a8ac67545450\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") "
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.408147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"8592764d-c12c-4340-8bc1-a8ac67545450\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") "
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.409211 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities" (OuterVolumeSpecName: "utilities") pod "8592764d-c12c-4340-8bc1-a8ac67545450" (UID: "8592764d-c12c-4340-8bc1-a8ac67545450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.419295 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r" (OuterVolumeSpecName: "kube-api-access-vqs7r") pod "8592764d-c12c-4340-8bc1-a8ac67545450" (UID: "8592764d-c12c-4340-8bc1-a8ac67545450"). InnerVolumeSpecName "kube-api-access-vqs7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.464381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8592764d-c12c-4340-8bc1-a8ac67545450" (UID: "8592764d-c12c-4340-8bc1-a8ac67545450"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.509515 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.509548 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.509561 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") on node \"crc\" DevicePath \"\""
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.814054 4979 generic.go:334] "Generic (PLEG): container finished" podID="8592764d-c12c-4340-8bc1-a8ac67545450" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" exitCode=0
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.814178 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.814201 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"}
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.815609 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"e10688bf7972c96d2aaf0e41ff4a1887b252366702cb92081eb97dde69ea3dfc"}
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.815642 4979 scope.go:117] "RemoveContainer" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.840744 4979 scope.go:117] "RemoveContainer" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.866199 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"]
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.867754 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"]
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.884151 4979 scope.go:117] "RemoveContainer" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.917599 4979 scope.go:117] "RemoveContainer" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"
Jan 30 22:23:17 crc kubenswrapper[4979]: E0130 22:23:17.918159 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a\": container with ID starting with 811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a not found: ID does not exist" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918196 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"} err="failed to get container status \"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a\": rpc error: code = NotFound desc = could not find container \"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a\": container with ID starting with 811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a not found: ID does not exist"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918231 4979 scope.go:117] "RemoveContainer" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"
Jan 30 22:23:17 crc kubenswrapper[4979]: E0130 22:23:17.918741 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044\": container with ID starting with 3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044 not found: ID does not exist" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918772 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"} err="failed to get container status \"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044\": rpc error: code = NotFound desc = could not find container \"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044\": container with ID starting with 3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044 not found: ID does not exist"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918793 4979 scope.go:117] "RemoveContainer" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"
Jan 30 22:23:17 crc kubenswrapper[4979]: E0130 22:23:17.919107 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676\": container with ID starting with 80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676 not found: ID does not exist" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"
Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.919137 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"} err="failed to get container status \"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676\": rpc error: code = NotFound desc = could not find container \"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676\": container with ID starting with 80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676 not found: ID does not exist"
Jan 30 22:23:19 crc kubenswrapper[4979]: I0130 22:23:19.088071 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" path="/var/lib/kubelet/pods/8592764d-c12c-4340-8bc1-a8ac67545450/volumes"
Jan 30 22:23:24 crc kubenswrapper[4979]: E0130 22:23:24.007859 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 22:23:34 crc kubenswrapper[4979]: E0130 22:23:34.194507 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 22:23:44 crc kubenswrapper[4979]: E0130 22:23:44.408289 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 22:23:54 crc kubenswrapper[4979]: E0130 22:23:54.602021 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 22:24:04 crc kubenswrapper[4979]: E0130 22:24:04.823207 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 22:25:02 crc kubenswrapper[4979]: I0130 22:25:02.039631 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:25:02 crc kubenswrapper[4979]: I0130 22:25:02.041507 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:25:32 crc kubenswrapper[4979]: I0130 22:25:32.040161 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:25:32 crc kubenswrapper[4979]: I0130 22:25:32.041664 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.039885 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.040703 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.040794 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.042026 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.042409 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee" gracePeriod=600
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413241 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee" exitCode=0
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee"}
Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413332 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"}
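The repeated liveness failures above are plain HTTP GETs to http://127.0.0.1:8798/health being refused because nothing is listening. A rough stand-in for what the prober does; the real kubelet prober has its own client, timeouts, headers, and success criteria, so treat this as an illustration only.

    import urllib.request

    def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        """Approximates an httpGet probe: 2xx/3xx is success, anything else fails."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return "success" if 200 <= resp.status < 400 else "failure"
        except OSError as exc:  # covers ConnectionRefusedError, timeouts, URLError
            return f"failure: {exc}"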
for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"} Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413349 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.532700 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:27:55 crc kubenswrapper[4979]: E0130 22:27:55.533495 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-content" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533506 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-content" Jan 30 22:27:55 crc kubenswrapper[4979]: E0130 22:27:55.533533 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533539 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" Jan 30 22:27:55 crc kubenswrapper[4979]: E0130 22:27:55.533548 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-utilities" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533554 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-utilities" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533687 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.534656 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.550065 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.731210 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.731260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.731324 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.832697 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.832744 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.832791 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.833343 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.833388 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.856449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.870047 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.111196 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.351483 4979 generic.go:334] "Generic (PLEG): container finished" podID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerID="4d42c9e9f285f592e473a3839966dfd6596c1bd71041ce1e5dfb19b925de6b58" exitCode=0 Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.351530 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"4d42c9e9f285f592e473a3839966dfd6596c1bd71041ce1e5dfb19b925de6b58"} Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.351557 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerStarted","Data":"6b99bfd4100719cf898aa3b9840bebde60ca18ebf75e1c0fca9e35c034576f49"} Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.353473 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:27:58 crc kubenswrapper[4979]: I0130 22:27:58.375916 4979 generic.go:334] "Generic (PLEG): container finished" podID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerID="e9715dcca3d3df64fcb241ca9badb177c47124563b895d7221fbe5ccdaa630ab" exitCode=0 Jan 30 22:27:58 crc kubenswrapper[4979]: I0130 22:27:58.376016 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"e9715dcca3d3df64fcb241ca9badb177c47124563b895d7221fbe5ccdaa630ab"} Jan 30 22:27:59 crc kubenswrapper[4979]: I0130 22:27:59.387579 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerStarted","Data":"4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592"} Jan 30 22:28:02 crc kubenswrapper[4979]: I0130 22:28:02.040462 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:28:02 crc kubenswrapper[4979]: I0130 22:28:02.040869 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:28:05 crc kubenswrapper[4979]: I0130 22:28:05.870318 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:05 crc kubenswrapper[4979]: I0130 
22:28:05.870714 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:05 crc kubenswrapper[4979]: I0130 22:28:05.973113 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:06 crc kubenswrapper[4979]: I0130 22:28:06.004085 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4xz7x" podStartSLOduration=8.567585456 podStartE2EDuration="11.004067146s" podCreationTimestamp="2026-01-30 22:27:55 +0000 UTC" firstStartedPulling="2026-01-30 22:27:56.353259451 +0000 UTC m=+2872.314506484" lastFinishedPulling="2026-01-30 22:27:58.789741141 +0000 UTC m=+2874.750988174" observedRunningTime="2026-01-30 22:27:59.418180584 +0000 UTC m=+2875.379427617" watchObservedRunningTime="2026-01-30 22:28:06.004067146 +0000 UTC m=+2881.965314179" Jan 30 22:28:06 crc kubenswrapper[4979]: I0130 22:28:06.521837 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:06 crc kubenswrapper[4979]: I0130 22:28:06.590256 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:28:08 crc kubenswrapper[4979]: I0130 22:28:08.471951 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4xz7x" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" containerID="cri-o://4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592" gracePeriod=2 Jan 30 22:28:09 crc kubenswrapper[4979]: I0130 22:28:09.485085 4979 generic.go:334] "Generic (PLEG): container finished" podID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerID="4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592" exitCode=0 Jan 30 22:28:09 crc kubenswrapper[4979]: I0130 22:28:09.485142 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592"} Jan 30 22:28:09 crc kubenswrapper[4979]: I0130 22:28:09.990582 4979 util.go:48] "No ready sandbox for pod can be found. 
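Folding the PLEG ContainerStarted/ContainerDied events for one pod into a per-container timeline makes sequences like the one above easier to follow (registry-server 4d876e8b... starts at 22:27:59 and dies after the gracePeriod=2 kill at 22:28:08). This builds on the parse()/pod_entries() sketch earlier and assumes the event={...} payload stays valid JSON, as it is in the lines above.

    import json, re

    EVENT = re.compile(r'event=(\{.*\})')

    def timeline(entries):
        """entries: (stamp, sev, src, msg) tuples for a single pod, from pod_entries()."""
        by_container = {}
        for stamp, _sev, _src, msg in entries:
            m = EVENT.search(msg)
            if m and "PLEG" in msg:
                ev = json.loads(m.group(1))
                by_container.setdefault(ev["Data"], []).append((stamp, ev["Type"]))
        return by_container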
Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.184899 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"489c8d6c-3ea7-4861-9883-bdd71844292e\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.185130 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"489c8d6c-3ea7-4861-9883-bdd71844292e\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.185534 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"489c8d6c-3ea7-4861-9883-bdd71844292e\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.186306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities" (OuterVolumeSpecName: "utilities") pod "489c8d6c-3ea7-4861-9883-bdd71844292e" (UID: "489c8d6c-3ea7-4861-9883-bdd71844292e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.193536 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm" (OuterVolumeSpecName: "kube-api-access-7smfm") pod "489c8d6c-3ea7-4861-9883-bdd71844292e" (UID: "489c8d6c-3ea7-4861-9883-bdd71844292e"). InnerVolumeSpecName "kube-api-access-7smfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.288317 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") on node \"crc\" DevicePath \"\"" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.288377 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.318846 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "489c8d6c-3ea7-4861-9883-bdd71844292e" (UID: "489c8d6c-3ea7-4861-9883-bdd71844292e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.390011 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.493802 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"6b99bfd4100719cf898aa3b9840bebde60ca18ebf75e1c0fca9e35c034576f49"} Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.493873 4979 scope.go:117] "RemoveContainer" containerID="4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.494022 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.519439 4979 scope.go:117] "RemoveContainer" containerID="e9715dcca3d3df64fcb241ca9badb177c47124563b895d7221fbe5ccdaa630ab" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.528591 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.533923 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.554309 4979 scope.go:117] "RemoveContainer" containerID="4d42c9e9f285f592e473a3839966dfd6596c1bd71041ce1e5dfb19b925de6b58" Jan 30 22:28:11 crc kubenswrapper[4979]: I0130 22:28:11.078085 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" path="/var/lib/kubelet/pods/489c8d6c-3ea7-4861-9883-bdd71844292e/volumes" Jan 30 22:28:32 crc kubenswrapper[4979]: I0130 22:28:32.040413 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:28:32 crc kubenswrapper[4979]: I0130 22:28:32.041126 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.039912 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.041682 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.041792 4979 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.042450 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.042577 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" gracePeriod=600 Jan 30 22:29:02 crc kubenswrapper[4979]: E0130 22:29:02.168269 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.909543 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" exitCode=0 Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.909760 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"} Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.910012 4979 scope.go:117] "RemoveContainer" containerID="29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.911126 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:02 crc kubenswrapper[4979]: E0130 22:29:02.911606 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:18 crc kubenswrapper[4979]: I0130 22:29:18.070430 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:18 crc kubenswrapper[4979]: E0130 22:29:18.071825 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.003259 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:23 crc kubenswrapper[4979]: E0130 22:29:23.004098 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-content" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004110 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-content" Jan 30 22:29:23 crc kubenswrapper[4979]: E0130 22:29:23.004133 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004139 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" Jan 30 22:29:23 crc kubenswrapper[4979]: E0130 22:29:23.004150 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-utilities" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004156 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-utilities" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004311 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.007568 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.014671 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.086725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.086795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.086975 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.188651 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " 
pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.188703 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.188734 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.189208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.189275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.211709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.337641 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.594768 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:24 crc kubenswrapper[4979]: I0130 22:29:24.113499 4979 generic.go:334] "Generic (PLEG): container finished" podID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" exitCode=0 Jan 30 22:29:24 crc kubenswrapper[4979]: I0130 22:29:24.113607 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b"} Jan 30 22:29:24 crc kubenswrapper[4979]: I0130 22:29:24.114102 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerStarted","Data":"90d22e28014a6d48da0d18f0c4f9c02b3aba304fb7c185a521bba041ad9b4dea"} Jan 30 22:29:25 crc kubenswrapper[4979]: I0130 22:29:25.121585 4979 generic.go:334] "Generic (PLEG): container finished" podID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" exitCode=0 Jan 30 22:29:25 crc kubenswrapper[4979]: I0130 22:29:25.121645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5"} Jan 30 22:29:26 crc kubenswrapper[4979]: I0130 22:29:26.134392 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerStarted","Data":"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34"} Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.069713 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:33 crc kubenswrapper[4979]: E0130 22:29:33.070416 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.338528 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.338595 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.401571 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.440359 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8mnlw" podStartSLOduration=10.047718304 podStartE2EDuration="11.440337734s" 
podCreationTimestamp="2026-01-30 22:29:22 +0000 UTC" firstStartedPulling="2026-01-30 22:29:24.11720473 +0000 UTC m=+2960.078451793" lastFinishedPulling="2026-01-30 22:29:25.50982415 +0000 UTC m=+2961.471071223" observedRunningTime="2026-01-30 22:29:26.165531002 +0000 UTC m=+2962.126778045" watchObservedRunningTime="2026-01-30 22:29:33.440337734 +0000 UTC m=+2969.401584767" Jan 30 22:29:34 crc kubenswrapper[4979]: I0130 22:29:34.260258 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:34 crc kubenswrapper[4979]: I0130 22:29:34.309951 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.205831 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8mnlw" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" containerID="cri-o://dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" gracePeriod=2 Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.617810 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.796290 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"1a898665-750f-49f6-8989-dcaf8b7e9f03\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.796457 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"1a898665-750f-49f6-8989-dcaf8b7e9f03\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.796516 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"1a898665-750f-49f6-8989-dcaf8b7e9f03\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.797562 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities" (OuterVolumeSpecName: "utilities") pod "1a898665-750f-49f6-8989-dcaf8b7e9f03" (UID: "1a898665-750f-49f6-8989-dcaf8b7e9f03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.813823 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j" (OuterVolumeSpecName: "kube-api-access-f2j2j") pod "1a898665-750f-49f6-8989-dcaf8b7e9f03" (UID: "1a898665-750f-49f6-8989-dcaf8b7e9f03"). InnerVolumeSpecName "kube-api-access-f2j2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.825304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a898665-750f-49f6-8989-dcaf8b7e9f03" (UID: "1a898665-750f-49f6-8989-dcaf8b7e9f03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.897875 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.897910 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") on node \"crc\" DevicePath \"\"" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.897922 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.213926 4979 generic.go:334] "Generic (PLEG): container finished" podID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" exitCode=0 Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.213980 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34"} Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.214014 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"90d22e28014a6d48da0d18f0c4f9c02b3aba304fb7c185a521bba041ad9b4dea"} Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.214058 4979 scope.go:117] "RemoveContainer" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.214094 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.236862 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.242863 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.251804 4979 scope.go:117] "RemoveContainer" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.269440 4979 scope.go:117] "RemoveContainer" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296003 4979 scope.go:117] "RemoveContainer" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" Jan 30 22:29:37 crc kubenswrapper[4979]: E0130 22:29:37.296498 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34\": container with ID starting with dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34 not found: ID does not exist" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296533 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34"} err="failed to get container status \"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34\": rpc error: code = NotFound desc = could not find container \"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34\": container with ID starting with dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34 not found: ID does not exist" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296553 4979 scope.go:117] "RemoveContainer" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" Jan 30 22:29:37 crc kubenswrapper[4979]: E0130 22:29:37.296922 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5\": container with ID starting with c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5 not found: ID does not exist" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296959 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5"} err="failed to get container status \"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5\": rpc error: code = NotFound desc = could not find container \"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5\": container with ID starting with c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5 not found: ID does not exist" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296977 4979 scope.go:117] "RemoveContainer" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" Jan 30 22:29:37 crc kubenswrapper[4979]: E0130 22:29:37.297396 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b\": container with ID starting with d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b not found: ID does not exist" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.297426 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b"} err="failed to get container status \"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b\": rpc error: code = NotFound desc = could not find container \"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b\": container with ID starting with d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b not found: ID does not exist" Jan 30 22:29:39 crc kubenswrapper[4979]: I0130 22:29:39.084121 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" path="/var/lib/kubelet/pods/1a898665-750f-49f6-8989-dcaf8b7e9f03/volumes" Jan 30 22:29:48 crc kubenswrapper[4979]: I0130 22:29:48.071128 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:48 crc kubenswrapper[4979]: E0130 22:29:48.072414 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.164328 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 22:30:00 crc kubenswrapper[4979]: E0130 22:30:00.165113 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-content" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165124 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-content" Jan 30 22:30:00 crc kubenswrapper[4979]: E0130 22:30:00.165133 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-utilities" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165140 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-utilities" Jan 30 22:30:00 crc kubenswrapper[4979]: E0130 22:30:00.165160 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165166 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165307 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165801 4979 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.168773 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.169429 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.178736 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.224395 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.224486 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.224671 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.327439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.327494 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.327530 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.329347 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.334272 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.346180 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.498681 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.978508 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.070124 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:30:01 crc kubenswrapper[4979]: E0130 22:30:01.070679 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.429704 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerID="06f1c39be4f79a10738471e24d46871dad22c8321fde40d1075b882f27317030" exitCode=0 Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.429855 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" event={"ID":"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03","Type":"ContainerDied","Data":"06f1c39be4f79a10738471e24d46871dad22c8321fde40d1075b882f27317030"} Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.430350 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" event={"ID":"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03","Type":"ContainerStarted","Data":"f46b4ebaff8ca26f8b09659d95d6c79fbd608275551a42fefa9b0ff8022bbfb0"} Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.802084 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.870699 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.870746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.870823 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.871850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" (UID: "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.876742 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc" (OuterVolumeSpecName: "kube-api-access-dkngc") pod "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" (UID: "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03"). InnerVolumeSpecName "kube-api-access-dkngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.880359 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" (UID: "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.972511 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.972549 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.972561 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") on node \"crc\" DevicePath \"\"" Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.451426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" event={"ID":"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03","Type":"ContainerDied","Data":"f46b4ebaff8ca26f8b09659d95d6c79fbd608275551a42fefa9b0ff8022bbfb0"} Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.451475 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46b4ebaff8ca26f8b09659d95d6c79fbd608275551a42fefa9b0ff8022bbfb0" Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.451540 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.883240 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"] Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.889637 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"] Jan 30 22:30:05 crc kubenswrapper[4979]: I0130 22:30:05.092853 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" path="/var/lib/kubelet/pods/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa/volumes" Jan 30 22:30:15 crc kubenswrapper[4979]: I0130 22:30:15.081546 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:30:15 crc kubenswrapper[4979]: E0130 22:30:15.082559 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:30:25 crc kubenswrapper[4979]: I0130 22:30:25.111697 4979 scope.go:117] "RemoveContainer" containerID="0ffeefd62cefc7a667955d4354abe400003540bade5b7a6dadf2ad36b308e029" Jan 30 22:30:26 crc kubenswrapper[4979]: I0130 22:30:26.069733 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:30:26 crc kubenswrapper[4979]: E0130 22:30:26.070777 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:30:41 crc kubenswrapper[4979]: I0130 22:30:41.070498 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:30:41 crc kubenswrapper[4979]: E0130 22:30:41.071492 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:30:56 crc kubenswrapper[4979]: I0130 22:30:56.069580 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:30:56 crc kubenswrapper[4979]: E0130 22:30:56.070644 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:31:10 crc kubenswrapper[4979]: I0130 22:31:10.069776 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:31:10 crc kubenswrapper[4979]: E0130 22:31:10.070473 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:31:22 crc kubenswrapper[4979]: I0130 22:31:22.070332 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:31:22 crc kubenswrapper[4979]: E0130 22:31:22.071406 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:31:36 crc kubenswrapper[4979]: I0130 22:31:36.069837 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:31:36 crc kubenswrapper[4979]: E0130 22:31:36.094943 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:31:47 crc kubenswrapper[4979]: I0130 22:31:47.079293 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:31:47 crc kubenswrapper[4979]: E0130 22:31:47.081866 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:32:00 crc kubenswrapper[4979]: I0130 22:32:00.069467 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:32:00 crc kubenswrapper[4979]: E0130 22:32:00.070207 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:32:13 crc kubenswrapper[4979]: I0130 22:32:13.069165 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:32:13 crc kubenswrapper[4979]: E0130 22:32:13.069891 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:32:24 crc kubenswrapper[4979]: I0130 22:32:24.069537 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:32:24 crc kubenswrapper[4979]: E0130 22:32:24.070284 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:32:36 crc kubenswrapper[4979]: I0130 22:32:36.069774 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:32:36 crc kubenswrapper[4979]: E0130 22:32:36.071343 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:32:49 crc kubenswrapper[4979]: I0130 22:32:49.071129 4979 
scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:32:49 crc kubenswrapper[4979]: E0130 22:32:49.073230 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:33:02 crc kubenswrapper[4979]: I0130 22:33:02.069340 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:33:02 crc kubenswrapper[4979]: E0130 22:33:02.069957 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:33:17 crc kubenswrapper[4979]: I0130 22:33:17.070162 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:33:17 crc kubenswrapper[4979]: E0130 22:33:17.070889 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:33:29 crc kubenswrapper[4979]: I0130 22:33:29.070272 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:33:29 crc kubenswrapper[4979]: E0130 22:33:29.071190 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:33:41 crc kubenswrapper[4979]: I0130 22:33:41.070451 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:33:41 crc kubenswrapper[4979]: E0130 22:33:41.071601 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:33:52 crc kubenswrapper[4979]: I0130 22:33:52.069855 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:33:52 crc kubenswrapper[4979]: E0130 22:33:52.070557 4979 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:34:04 crc kubenswrapper[4979]: I0130 22:34:04.070860 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:34:04 crc kubenswrapper[4979]: I0130 22:34:04.524815 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"} Jan 30 22:36:32 crc kubenswrapper[4979]: I0130 22:36:32.039808 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:36:32 crc kubenswrapper[4979]: I0130 22:36:32.040365 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:37:02 crc kubenswrapper[4979]: I0130 22:37:02.039534 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:37:02 crc kubenswrapper[4979]: I0130 22:37:02.040106 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.976686 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:04 crc kubenswrapper[4979]: E0130 22:37:04.977483 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerName="collect-profiles" Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.977499 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerName="collect-profiles" Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.977671 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerName="collect-profiles" Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.978901 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.993439 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.089419 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.089758 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.089897 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.190840 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191201 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191616 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191654 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.216126 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.299566 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.823719 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.072969 4979 generic.go:334] "Generic (PLEG): container finished" podID="55590b62-7614-4467-9e71-a7ac065608be" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb" exitCode=0 Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.073064 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb"} Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.073322 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerStarted","Data":"cb846964d8120b4f40a8b60da205d6da39b5487ee4cdf2eab7e41d39dc240a9a"} Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.074498 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:37:07 crc kubenswrapper[4979]: I0130 22:37:07.081948 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerStarted","Data":"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"} Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.090483 4979 generic.go:334] "Generic (PLEG): container finished" podID="55590b62-7614-4467-9e71-a7ac065608be" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a" exitCode=0 Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.090533 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"} Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.187524 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.189215 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.189314 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.237134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm92m\" (UniqueName: \"kubernetes.io/projected/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-kube-api-access-dm92m\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.237198 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-catalog-content\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.237259 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-utilities\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.338375 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm92m\" (UniqueName: \"kubernetes.io/projected/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-kube-api-access-dm92m\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.338437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-catalog-content\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.339278 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-utilities\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.338472 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-utilities\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.339286 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-catalog-content\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.357004 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm92m\" (UniqueName: \"kubernetes.io/projected/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-kube-api-access-dm92m\") pod 
\"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.525804 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.032512 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:09 crc kubenswrapper[4979]: W0130 22:37:09.037230 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f3bbd3_c82f_47bd_92fe_4dbe53982abc.slice/crio-5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd WatchSource:0}: Error finding container 5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd: Status 404 returned error can't find the container with id 5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.098785 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerStarted","Data":"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038"} Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.103967 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerStarted","Data":"5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd"} Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.118685 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bbfnr" podStartSLOduration=2.740648238 podStartE2EDuration="5.118648225s" podCreationTimestamp="2026-01-30 22:37:04 +0000 UTC" firstStartedPulling="2026-01-30 22:37:06.074206466 +0000 UTC m=+3422.035453499" lastFinishedPulling="2026-01-30 22:37:08.452206453 +0000 UTC m=+3424.413453486" observedRunningTime="2026-01-30 22:37:09.115822007 +0000 UTC m=+3425.077069050" watchObservedRunningTime="2026-01-30 22:37:09.118648225 +0000 UTC m=+3425.079895258" Jan 30 22:37:10 crc kubenswrapper[4979]: I0130 22:37:10.112069 4979 generic.go:334] "Generic (PLEG): container finished" podID="83f3bbd3-c82f-47bd-92fe-4dbe53982abc" containerID="5806a5523ea433d16d91c6b9e7c9d92d1392042a624ed47b1d34a189b8892a4a" exitCode=0 Jan 30 22:37:10 crc kubenswrapper[4979]: I0130 22:37:10.112139 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerDied","Data":"5806a5523ea433d16d91c6b9e7c9d92d1392042a624ed47b1d34a189b8892a4a"} Jan 30 22:37:14 crc kubenswrapper[4979]: I0130 22:37:14.147744 4979 generic.go:334] "Generic (PLEG): container finished" podID="83f3bbd3-c82f-47bd-92fe-4dbe53982abc" containerID="59a0b05623c4fed8c9ed5ca8dfd1b830c93516a708911d0018f953b6078e7542" exitCode=0 Jan 30 22:37:14 crc kubenswrapper[4979]: I0130 22:37:14.147837 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerDied","Data":"59a0b05623c4fed8c9ed5ca8dfd1b830c93516a708911d0018f953b6078e7542"} Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.178918 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerStarted","Data":"3bf6c9395780112e8c9668989fec70875d5e7948203ae87294f9415bee7bfcf8"} Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.198921 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jwmc" podStartSLOduration=2.775788769 podStartE2EDuration="7.198898907s" podCreationTimestamp="2026-01-30 22:37:08 +0000 UTC" firstStartedPulling="2026-01-30 22:37:10.115471065 +0000 UTC m=+3426.076718098" lastFinishedPulling="2026-01-30 22:37:14.538581203 +0000 UTC m=+3430.499828236" observedRunningTime="2026-01-30 22:37:15.195898345 +0000 UTC m=+3431.157145378" watchObservedRunningTime="2026-01-30 22:37:15.198898907 +0000 UTC m=+3431.160145940" Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.300802 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.300888 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.349008 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:16 crc kubenswrapper[4979]: I0130 22:37:16.239481 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:17 crc kubenswrapper[4979]: I0130 22:37:17.168718 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.206099 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bbfnr" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" containerID="cri-o://79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" gracePeriod=2 Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.526704 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.526807 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.600521 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.637768 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.712067 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"55590b62-7614-4467-9e71-a7ac065608be\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.712167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"55590b62-7614-4467-9e71-a7ac065608be\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.712226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"55590b62-7614-4467-9e71-a7ac065608be\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.713905 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities" (OuterVolumeSpecName: "utilities") pod "55590b62-7614-4467-9e71-a7ac065608be" (UID: "55590b62-7614-4467-9e71-a7ac065608be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.720241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt" (OuterVolumeSpecName: "kube-api-access-8p4mt") pod "55590b62-7614-4467-9e71-a7ac065608be" (UID: "55590b62-7614-4467-9e71-a7ac065608be"). InnerVolumeSpecName "kube-api-access-8p4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.766059 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55590b62-7614-4467-9e71-a7ac065608be" (UID: "55590b62-7614-4467-9e71-a7ac065608be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.814084 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.814123 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.814135 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.214884 4979 generic.go:334] "Generic (PLEG): container finished" podID="55590b62-7614-4467-9e71-a7ac065608be" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" exitCode=0 Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.214954 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.215023 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038"} Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.215072 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"cb846964d8120b4f40a8b60da205d6da39b5487ee4cdf2eab7e41d39dc240a9a"} Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.215090 4979 scope.go:117] "RemoveContainer" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.240857 4979 scope.go:117] "RemoveContainer" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.251999 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.256768 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.272827 4979 scope.go:117] "RemoveContainer" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.281177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.338105 4979 scope.go:117] "RemoveContainer" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" Jan 30 22:37:19 crc kubenswrapper[4979]: E0130 22:37:19.338778 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038\": container with ID starting with 
Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.338831 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038"} err="failed to get container status \"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038\": rpc error: code = NotFound desc = could not find container \"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038\": container with ID starting with 79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038 not found: ID does not exist"
Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.338867 4979 scope.go:117] "RemoveContainer" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"
Jan 30 22:37:19 crc kubenswrapper[4979]: E0130 22:37:19.339420 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a\": container with ID starting with d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a not found: ID does not exist" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"
Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.339457 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"} err="failed to get container status \"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a\": rpc error: code = NotFound desc = could not find container \"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a\": container with ID starting with d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a not found: ID does not exist"
Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.339483 4979 scope.go:117] "RemoveContainer" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb"
Jan 30 22:37:19 crc kubenswrapper[4979]: E0130 22:37:19.339833 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb\": container with ID starting with 004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb not found: ID does not exist" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb"
Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.339889 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb"} err="failed to get container status \"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb\": rpc error: code = NotFound desc = could not find container \"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb\": container with ID starting with 004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb not found: ID does not exist"
Jan 30 22:37:20 crc kubenswrapper[4979]: I0130 22:37:20.592293 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"]
Jan 30 22:37:20 crc kubenswrapper[4979]: I0130 22:37:20.966538 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"]
pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 22:37:20 crc kubenswrapper[4979]: I0130 22:37:20.966865 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvj6v" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" containerID="cri-o://987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a" gracePeriod=2 Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.000620 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mvj6v" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" probeResult="failure" output="" Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.080169 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55590b62-7614-4467-9e71-a7ac065608be" path="/var/lib/kubelet/pods/55590b62-7614-4467-9e71-a7ac065608be/volumes" Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.231560 4979 generic.go:334] "Generic (PLEG): container finished" podID="135dc03e-075f-41a4-934c-8d914d497f69" containerID="987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a" exitCode=0 Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.231630 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a"} Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.243330 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"839a0e21c6342d6c49c0683bac9adda801e1ebfd8079dc25226f6fa62891ca90"} Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.243800 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839a0e21c6342d6c49c0683bac9adda801e1ebfd8079dc25226f6fa62891ca90" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.287060 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.362606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"135dc03e-075f-41a4-934c-8d914d497f69\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.362689 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"135dc03e-075f-41a4-934c-8d914d497f69\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.362800 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"135dc03e-075f-41a4-934c-8d914d497f69\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.364542 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities" (OuterVolumeSpecName: "utilities") pod "135dc03e-075f-41a4-934c-8d914d497f69" (UID: "135dc03e-075f-41a4-934c-8d914d497f69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.373762 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q" (OuterVolumeSpecName: "kube-api-access-2mp8q") pod "135dc03e-075f-41a4-934c-8d914d497f69" (UID: "135dc03e-075f-41a4-934c-8d914d497f69"). InnerVolumeSpecName "kube-api-access-2mp8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.422111 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "135dc03e-075f-41a4-934c-8d914d497f69" (UID: "135dc03e-075f-41a4-934c-8d914d497f69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.464912 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.464949 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.464958 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:23 crc kubenswrapper[4979]: I0130 22:37:23.249183 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 22:37:23 crc kubenswrapper[4979]: I0130 22:37:23.275354 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 22:37:23 crc kubenswrapper[4979]: I0130 22:37:23.281195 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.077503 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135dc03e-075f-41a4-934c-8d914d497f69" path="/var/lib/kubelet/pods/135dc03e-075f-41a4-934c-8d914d497f69/volumes" Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.282719 4979 scope.go:117] "RemoveContainer" containerID="987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a" Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.313462 4979 scope.go:117] "RemoveContainer" containerID="d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a" Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.335949 4979 scope.go:117] "RemoveContainer" containerID="2775cfa6f3efbca70770c0157c242e36a5de365efbaf9c6628031b3077d49317" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.039956 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.040643 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.040702 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.041430 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.041496 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d" gracePeriod=600 Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.319267 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d" exitCode=0 Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.319334 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"} Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.319767 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:37:33 crc kubenswrapper[4979]: I0130 22:37:33.334699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"} Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.071226 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073500 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073527 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073544 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073552 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073573 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073580 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073590 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073596 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073604 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073610 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073621 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073626 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073756 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073770 4979 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.078721 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.085335 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.229697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.229779 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.229847 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331091 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331274 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331651 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331804 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " 
pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.354898 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.415405 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.708421 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:12 crc kubenswrapper[4979]: I0130 22:38:12.650910 4979 generic.go:334] "Generic (PLEG): container finished" podID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" exitCode=0 Jan 30 22:38:12 crc kubenswrapper[4979]: I0130 22:38:12.651095 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2"} Jan 30 22:38:12 crc kubenswrapper[4979]: I0130 22:38:12.651399 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerStarted","Data":"11196abdf95df0f6495604477e8cd4766707c80d6e4e4037cc9f84915871ee09"} Jan 30 22:38:13 crc kubenswrapper[4979]: I0130 22:38:13.674850 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerStarted","Data":"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396"} Jan 30 22:38:14 crc kubenswrapper[4979]: I0130 22:38:14.685155 4979 generic.go:334] "Generic (PLEG): container finished" podID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" exitCode=0 Jan 30 22:38:14 crc kubenswrapper[4979]: I0130 22:38:14.685217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396"} Jan 30 22:38:15 crc kubenswrapper[4979]: I0130 22:38:15.696681 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerStarted","Data":"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908"} Jan 30 22:38:15 crc kubenswrapper[4979]: I0130 22:38:15.714173 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5bxpk" podStartSLOduration=2.286324164 podStartE2EDuration="4.714151197s" podCreationTimestamp="2026-01-30 22:38:11 +0000 UTC" firstStartedPulling="2026-01-30 22:38:12.652463795 +0000 UTC m=+3488.613710828" lastFinishedPulling="2026-01-30 22:38:15.080290828 +0000 UTC m=+3491.041537861" observedRunningTime="2026-01-30 22:38:15.710707892 +0000 UTC m=+3491.671954925" watchObservedRunningTime="2026-01-30 22:38:15.714151197 +0000 UTC m=+3491.675398230" Jan 
Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.416335 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5bxpk"
Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.416697 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5bxpk"
Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.470949 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5bxpk"
Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.781336 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5bxpk"
Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.833381 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"]
Jan 30 22:38:23 crc kubenswrapper[4979]: I0130 22:38:23.750820 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5bxpk" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" containerID="cri-o://859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" gracePeriod=2
Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.283535 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk"
Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.442579 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"91de2670-3c8a-408b-8f65-742db32eb2a4\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") "
Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.442641 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"91de2670-3c8a-408b-8f65-742db32eb2a4\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") "
Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.442729 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"91de2670-3c8a-408b-8f65-742db32eb2a4\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") "
Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.443687 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities" (OuterVolumeSpecName: "utilities") pod "91de2670-3c8a-408b-8f65-742db32eb2a4" (UID: "91de2670-3c8a-408b-8f65-742db32eb2a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.452673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm" (OuterVolumeSpecName: "kube-api-access-ppbfm") pod "91de2670-3c8a-408b-8f65-742db32eb2a4" (UID: "91de2670-3c8a-408b-8f65-742db32eb2a4"). InnerVolumeSpecName "kube-api-access-ppbfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.544514 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") on node \"crc\" DevicePath \"\"" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.544571 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.559702 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91de2670-3c8a-408b-8f65-742db32eb2a4" (UID: "91de2670-3c8a-408b-8f65-742db32eb2a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.645766 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.771863 4979 generic.go:334] "Generic (PLEG): container finished" podID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" exitCode=0 Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.771920 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.771957 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908"} Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.772259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"11196abdf95df0f6495604477e8cd4766707c80d6e4e4037cc9f84915871ee09"} Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.772280 4979 scope.go:117] "RemoveContainer" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.789429 4979 scope.go:117] "RemoveContainer" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.807081 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.811721 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.825023 4979 scope.go:117] "RemoveContainer" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.850316 4979 scope.go:117] "RemoveContainer" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" Jan 30 22:38:25 crc kubenswrapper[4979]: E0130 22:38:25.850776 4979 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908\": container with ID starting with 859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908 not found: ID does not exist" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.850817 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908"} err="failed to get container status \"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908\": rpc error: code = NotFound desc = could not find container \"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908\": container with ID starting with 859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908 not found: ID does not exist" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.850845 4979 scope.go:117] "RemoveContainer" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" Jan 30 22:38:25 crc kubenswrapper[4979]: E0130 22:38:25.851244 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396\": container with ID starting with 27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396 not found: ID does not exist" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.851278 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396"} err="failed to get container status \"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396\": rpc error: code = NotFound desc = could not find container \"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396\": container with ID starting with 27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396 not found: ID does not exist" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.851300 4979 scope.go:117] "RemoveContainer" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" Jan 30 22:38:25 crc kubenswrapper[4979]: E0130 22:38:25.851654 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2\": container with ID starting with 645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2 not found: ID does not exist" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.851686 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2"} err="failed to get container status \"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2\": rpc error: code = NotFound desc = could not find container \"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2\": container with ID starting with 645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2 not found: ID does not exist" Jan 30 22:38:27 crc kubenswrapper[4979]: I0130 22:38:27.078919 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" path="/var/lib/kubelet/pods/91de2670-3c8a-408b-8f65-742db32eb2a4/volumes" Jan 30 22:39:32 crc kubenswrapper[4979]: I0130 22:39:32.039363 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:39:32 crc kubenswrapper[4979]: I0130 22:39:32.039835 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.552183 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:46 crc kubenswrapper[4979]: E0130 22:39:46.552973 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-content" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.552985 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-content" Jan 30 22:39:46 crc kubenswrapper[4979]: E0130 22:39:46.552998 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.553004 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" Jan 30 22:39:46 crc kubenswrapper[4979]: E0130 22:39:46.553020 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-utilities" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.553027 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-utilities" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.553182 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.554078 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.567828 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.593552 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.593625 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.593656 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.694625 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.694695 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.694723 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.695213 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.695258 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.719799 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.875722 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:47 crc kubenswrapper[4979]: I0130 22:39:47.290232 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:47 crc kubenswrapper[4979]: I0130 22:39:47.409408 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerStarted","Data":"d46def05756d6dc16cd8a2911dd8cc842950b9f35c428cde368fcb6a9dc7f78f"} Jan 30 22:39:48 crc kubenswrapper[4979]: I0130 22:39:48.419942 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722" exitCode=0 Jan 30 22:39:48 crc kubenswrapper[4979]: I0130 22:39:48.420025 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"} Jan 30 22:39:49 crc kubenswrapper[4979]: I0130 22:39:49.428181 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6" exitCode=0 Jan 30 22:39:49 crc kubenswrapper[4979]: I0130 22:39:49.428305 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"} Jan 30 22:39:50 crc kubenswrapper[4979]: I0130 22:39:50.441358 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerStarted","Data":"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"} Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.876882 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.877542 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.929602 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.954621 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-66xk9" podStartSLOduration=9.513907694 podStartE2EDuration="10.954601574s" podCreationTimestamp="2026-01-30 22:39:46 +0000 UTC" firstStartedPulling="2026-01-30 22:39:48.421935315 +0000 UTC m=+3584.383182348" lastFinishedPulling="2026-01-30 22:39:49.862629195 +0000 UTC m=+3585.823876228" observedRunningTime="2026-01-30 22:39:50.469930314 +0000 UTC 
m=+3586.431177367" watchObservedRunningTime="2026-01-30 22:39:56.954601574 +0000 UTC m=+3592.915848627" Jan 30 22:39:57 crc kubenswrapper[4979]: I0130 22:39:57.542084 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:57 crc kubenswrapper[4979]: I0130 22:39:57.597943 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:59 crc kubenswrapper[4979]: I0130 22:39:59.512945 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-66xk9" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" containerID="cri-o://190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" gracePeriod=2 Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.019743 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.081961 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.086497 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464" (OuterVolumeSpecName: "kube-api-access-sd464") pod "c3826ec4-db18-474e-8fbf-0f4fd2c4669f" (UID: "c3826ec4-db18-474e-8fbf-0f4fd2c4669f"). InnerVolumeSpecName "kube-api-access-sd464". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.183617 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.183670 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.184195 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") on node \"crc\" DevicePath \"\"" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.184612 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities" (OuterVolumeSpecName: "utilities") pod "c3826ec4-db18-474e-8fbf-0f4fd2c4669f" (UID: "c3826ec4-db18-474e-8fbf-0f4fd2c4669f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.213284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3826ec4-db18-474e-8fbf-0f4fd2c4669f" (UID: "c3826ec4-db18-474e-8fbf-0f4fd2c4669f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.285696 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.285733 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524143 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" exitCode=0 Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"} Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524242 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524288 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"d46def05756d6dc16cd8a2911dd8cc842950b9f35c428cde368fcb6a9dc7f78f"} Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524332 4979 scope.go:117] "RemoveContainer" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.553484 4979 scope.go:117] "RemoveContainer" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.572939 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.583159 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.595456 4979 scope.go:117] "RemoveContainer" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.613163 4979 scope.go:117] "RemoveContainer" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" Jan 30 22:40:00 crc kubenswrapper[4979]: E0130 22:40:00.613705 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2\": container with ID starting with 190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2 not found: ID 
does not exist" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.613823 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"} err="failed to get container status \"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2\": rpc error: code = NotFound desc = could not find container \"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2\": container with ID starting with 190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2 not found: ID does not exist" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.613917 4979 scope.go:117] "RemoveContainer" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6" Jan 30 22:40:00 crc kubenswrapper[4979]: E0130 22:40:00.614359 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6\": container with ID starting with 81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6 not found: ID does not exist" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.614393 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"} err="failed to get container status \"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6\": rpc error: code = NotFound desc = could not find container \"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6\": container with ID starting with 81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6 not found: ID does not exist" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.614416 4979 scope.go:117] "RemoveContainer" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722" Jan 30 22:40:00 crc kubenswrapper[4979]: E0130 22:40:00.614738 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722\": container with ID starting with 5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722 not found: ID does not exist" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722" Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.614773 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"} err="failed to get container status \"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722\": rpc error: code = NotFound desc = could not find container \"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722\": container with ID starting with 5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722 not found: ID does not exist" Jan 30 22:40:01 crc kubenswrapper[4979]: I0130 22:40:01.086387 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" path="/var/lib/kubelet/pods/c3826ec4-db18-474e-8fbf-0f4fd2c4669f/volumes" Jan 30 22:40:02 crc kubenswrapper[4979]: I0130 22:40:02.040002 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:40:02 crc kubenswrapper[4979]: I0130 22:40:02.040164 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.039915 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.040929 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.041014 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.042310 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.042409 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" gracePeriod=600 Jan 30 22:40:32 crc kubenswrapper[4979]: E0130 22:40:32.168129 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.786643 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" exitCode=0 Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.786704 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"} Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.786752 4979 
scope.go:117] "RemoveContainer" containerID="6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d" Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.787385 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:40:32 crc kubenswrapper[4979]: E0130 22:40:32.787650 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:40:43 crc kubenswrapper[4979]: I0130 22:40:43.070527 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:40:43 crc kubenswrapper[4979]: E0130 22:40:43.071640 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:40:55 crc kubenswrapper[4979]: I0130 22:40:55.077955 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:40:55 crc kubenswrapper[4979]: E0130 22:40:55.080025 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:41:06 crc kubenswrapper[4979]: I0130 22:41:06.070311 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:41:06 crc kubenswrapper[4979]: E0130 22:41:06.071168 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:41:20 crc kubenswrapper[4979]: I0130 22:41:20.070138 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:41:20 crc kubenswrapper[4979]: E0130 22:41:20.071021 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:41:33 crc kubenswrapper[4979]: I0130 22:41:33.070259 4979 scope.go:117] 
"RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:41:33 crc kubenswrapper[4979]: E0130 22:41:33.071089 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:41:45 crc kubenswrapper[4979]: I0130 22:41:45.081503 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:41:45 crc kubenswrapper[4979]: E0130 22:41:45.082527 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:41:56 crc kubenswrapper[4979]: I0130 22:41:56.069252 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:41:56 crc kubenswrapper[4979]: E0130 22:41:56.070085 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:42:08 crc kubenswrapper[4979]: I0130 22:42:08.069848 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:42:08 crc kubenswrapper[4979]: E0130 22:42:08.071451 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:42:23 crc kubenswrapper[4979]: I0130 22:42:23.070592 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:42:23 crc kubenswrapper[4979]: E0130 22:42:23.071361 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:42:36 crc kubenswrapper[4979]: I0130 22:42:36.069913 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:42:36 crc kubenswrapper[4979]: E0130 22:42:36.070696 4979 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:42:49 crc kubenswrapper[4979]: I0130 22:42:49.070845 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:42:49 crc kubenswrapper[4979]: E0130 22:42:49.071724 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:43:00 crc kubenswrapper[4979]: I0130 22:43:00.069699 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:43:00 crc kubenswrapper[4979]: E0130 22:43:00.070638 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:43:15 crc kubenswrapper[4979]: I0130 22:43:15.077842 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:43:15 crc kubenswrapper[4979]: E0130 22:43:15.078772 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:43:30 crc kubenswrapper[4979]: I0130 22:43:30.069140 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:43:30 crc kubenswrapper[4979]: E0130 22:43:30.069903 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:43:43 crc kubenswrapper[4979]: I0130 22:43:43.069350 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:43:43 crc kubenswrapper[4979]: E0130 22:43:43.070148 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:43:57 crc kubenswrapper[4979]: I0130 22:43:57.069311 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:43:57 crc kubenswrapper[4979]: E0130 22:43:57.071076 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:44:11 crc kubenswrapper[4979]: I0130 22:44:11.069143 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:44:11 crc kubenswrapper[4979]: E0130 22:44:11.069879 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:44:23 crc kubenswrapper[4979]: I0130 22:44:23.070428 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:44:23 crc kubenswrapper[4979]: E0130 22:44:23.070878 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:44:38 crc kubenswrapper[4979]: I0130 22:44:38.070287 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:44:38 crc kubenswrapper[4979]: E0130 22:44:38.071334 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:44:51 crc kubenswrapper[4979]: I0130 22:44:51.069951 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:44:51 crc kubenswrapper[4979]: E0130 22:44:51.070681 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.181108 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 22:45:00 crc kubenswrapper[4979]: E0130 22:45:00.182180 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182207 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4979]: E0130 22:45:00.182237 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4979]: E0130 22:45:00.182272 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182283 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182562 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.183335 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.186876 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.187137 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.189990 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.290291 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.290405 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.290443 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjdd\" (UniqueName: 
\"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.391864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.391923 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.394537 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.395409 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.407976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.408750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.504942 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.927778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 22:45:00 crc kubenswrapper[4979]: W0130 22:45:00.938213 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod104b2fbe_7925_4ef8_afca_adf78844b1e4.slice/crio-8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1 WatchSource:0}: Error finding container 8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1: Status 404 returned error can't find the container with id 8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1 Jan 30 22:45:01 crc kubenswrapper[4979]: I0130 22:45:01.003999 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" event={"ID":"104b2fbe-7925-4ef8-afca-adf78844b1e4","Type":"ContainerStarted","Data":"8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1"} Jan 30 22:45:02 crc kubenswrapper[4979]: I0130 22:45:02.011931 4979 generic.go:334] "Generic (PLEG): container finished" podID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerID="f4376d94646a15043c11ecee25a291d34f53ab6e158c8bf8bf94d2318ee02027" exitCode=0 Jan 30 22:45:02 crc kubenswrapper[4979]: I0130 22:45:02.012078 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" event={"ID":"104b2fbe-7925-4ef8-afca-adf78844b1e4","Type":"ContainerDied","Data":"f4376d94646a15043c11ecee25a291d34f53ab6e158c8bf8bf94d2318ee02027"} Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.239537 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.336791 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"104b2fbe-7925-4ef8-afca-adf78844b1e4\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.336905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"104b2fbe-7925-4ef8-afca-adf78844b1e4\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.336988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"104b2fbe-7925-4ef8-afca-adf78844b1e4\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.337527 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "104b2fbe-7925-4ef8-afca-adf78844b1e4" (UID: "104b2fbe-7925-4ef8-afca-adf78844b1e4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.338210 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.341933 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd" (OuterVolumeSpecName: "kube-api-access-mkjdd") pod "104b2fbe-7925-4ef8-afca-adf78844b1e4" (UID: "104b2fbe-7925-4ef8-afca-adf78844b1e4"). InnerVolumeSpecName "kube-api-access-mkjdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.343320 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "104b2fbe-7925-4ef8-afca-adf78844b1e4" (UID: "104b2fbe-7925-4ef8-afca-adf78844b1e4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.439619 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.439655 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.026736 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" event={"ID":"104b2fbe-7925-4ef8-afca-adf78844b1e4","Type":"ContainerDied","Data":"8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1"} Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.026771 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1" Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.026808 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.303456 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.310581 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:45:05 crc kubenswrapper[4979]: I0130 22:45:05.076930 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:45:05 crc kubenswrapper[4979]: E0130 22:45:05.077475 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:45:05 crc kubenswrapper[4979]: I0130 22:45:05.092716 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" path="/var/lib/kubelet/pods/365cfffa-828e-4f0e-9903-4c1580e20c67/volumes" Jan 30 22:45:18 crc kubenswrapper[4979]: I0130 22:45:18.069331 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:45:18 crc kubenswrapper[4979]: E0130 22:45:18.069983 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:45:25 crc kubenswrapper[4979]: I0130 22:45:25.544252 4979 scope.go:117] "RemoveContainer" containerID="63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68" Jan 30 22:45:30 crc kubenswrapper[4979]: I0130 22:45:30.070241 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:45:30 crc kubenswrapper[4979]: E0130 22:45:30.071132 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:45:43 crc kubenswrapper[4979]: I0130 22:45:43.070302 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:45:43 crc kubenswrapper[4979]: I0130 22:45:43.296090 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"} Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.337585 4979 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:27 crc kubenswrapper[4979]: E0130 22:47:27.345711 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerName="collect-profiles" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.345733 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerName="collect-profiles" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.345930 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerName="collect-profiles" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.347230 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.347710 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.493585 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.493637 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.493796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.594567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.594626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.594647 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.595235 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.595292 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.615459 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.670696 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:28 crc kubenswrapper[4979]: I0130 22:47:28.201944 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.119448 4979 generic.go:334] "Generic (PLEG): container finished" podID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerID="4283e036f0bfd880adf92b50ac2f32a4a2845dd240c425f041c8745290cf9cd6" exitCode=0 Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.119563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"4283e036f0bfd880adf92b50ac2f32a4a2845dd240c425f041c8745290cf9cd6"} Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.119805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerStarted","Data":"2c8eee9870667e78df791eca9d462625a8b2ae9eab002a6e958a2d7adf4b6611"} Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.123005 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:47:30 crc kubenswrapper[4979]: I0130 22:47:30.128078 4979 generic.go:334] "Generic (PLEG): container finished" podID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerID="bee75989ae32b9e3da9cd5d54c7b52fae48857d4c521afab1b9f1195918e3919" exitCode=0 Jan 30 22:47:30 crc kubenswrapper[4979]: I0130 22:47:30.128174 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"bee75989ae32b9e3da9cd5d54c7b52fae48857d4c521afab1b9f1195918e3919"} Jan 30 22:47:31 crc kubenswrapper[4979]: I0130 22:47:31.138213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerStarted","Data":"3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79"} Jan 30 22:47:31 crc kubenswrapper[4979]: I0130 22:47:31.176573 4979 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-bgb7v" podStartSLOduration=2.554934426 podStartE2EDuration="4.176548653s" podCreationTimestamp="2026-01-30 22:47:27 +0000 UTC" firstStartedPulling="2026-01-30 22:47:29.122555357 +0000 UTC m=+4045.083802430" lastFinishedPulling="2026-01-30 22:47:30.744169634 +0000 UTC m=+4046.705416657" observedRunningTime="2026-01-30 22:47:31.157126238 +0000 UTC m=+4047.118373271" watchObservedRunningTime="2026-01-30 22:47:31.176548653 +0000 UTC m=+4047.137795686" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.049740 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kw66v"] Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.051844 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.065258 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kw66v"] Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.222815 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.223059 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.223294 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.324519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.324667 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.324710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.325383 
4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.325469 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.353848 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.377439 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.843141 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kw66v"] Jan 30 22:47:36 crc kubenswrapper[4979]: I0130 22:47:36.169067 4979 generic.go:334] "Generic (PLEG): container finished" podID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf" exitCode=0 Jan 30 22:47:36 crc kubenswrapper[4979]: I0130 22:47:36.169109 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"} Jan 30 22:47:36 crc kubenswrapper[4979]: I0130 22:47:36.169134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerStarted","Data":"ab59619a27c710eb68b79d0a064ccdbed30ed0efc3ed64a23d934642a11a4801"} Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.178689 4979 generic.go:334] "Generic (PLEG): container finished" podID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a" exitCode=0 Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.178778 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"} Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.671084 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.671674 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.711862 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:38 crc 
kubenswrapper[4979]: I0130 22:47:38.188222 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerStarted","Data":"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"} Jan 30 22:47:38 crc kubenswrapper[4979]: I0130 22:47:38.210999 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kw66v" podStartSLOduration=1.839325543 podStartE2EDuration="3.21098438s" podCreationTimestamp="2026-01-30 22:47:35 +0000 UTC" firstStartedPulling="2026-01-30 22:47:36.170539821 +0000 UTC m=+4052.131786854" lastFinishedPulling="2026-01-30 22:47:37.542198658 +0000 UTC m=+4053.503445691" observedRunningTime="2026-01-30 22:47:38.205551403 +0000 UTC m=+4054.166798436" watchObservedRunningTime="2026-01-30 22:47:38.21098438 +0000 UTC m=+4054.172231403" Jan 30 22:47:38 crc kubenswrapper[4979]: I0130 22:47:38.241861 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:40 crc kubenswrapper[4979]: I0130 22:47:40.014164 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:41 crc kubenswrapper[4979]: I0130 22:47:41.211981 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bgb7v" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server" containerID="cri-o://3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79" gracePeriod=2 Jan 30 22:47:42 crc kubenswrapper[4979]: I0130 22:47:42.222163 4979 generic.go:334] "Generic (PLEG): container finished" podID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerID="3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79" exitCode=0 Jan 30 22:47:42 crc kubenswrapper[4979]: I0130 22:47:42.222643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79"} Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.021279 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.137250 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") "
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.137318 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") "
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.137374 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") "
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.138915 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities" (OuterVolumeSpecName: "utilities") pod "db22aed9-7413-4d06-8b61-fb6f730cf1cc" (UID: "db22aed9-7413-4d06-8b61-fb6f730cf1cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.145127 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc" (OuterVolumeSpecName: "kube-api-access-57mcc") pod "db22aed9-7413-4d06-8b61-fb6f730cf1cc" (UID: "db22aed9-7413-4d06-8b61-fb6f730cf1cc"). InnerVolumeSpecName "kube-api-access-57mcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.193281 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db22aed9-7413-4d06-8b61-fb6f730cf1cc" (UID: "db22aed9-7413-4d06-8b61-fb6f730cf1cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.231435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"2c8eee9870667e78df791eca9d462625a8b2ae9eab002a6e958a2d7adf4b6611"}
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.231487 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.231501 4979 scope.go:117] "RemoveContainer" containerID="3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.238664 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.238690 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.238698 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.257874 4979 scope.go:117] "RemoveContainer" containerID="bee75989ae32b9e3da9cd5d54c7b52fae48857d4c521afab1b9f1195918e3919"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.269007 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"]
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.274616 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"]
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.286032 4979 scope.go:117] "RemoveContainer" containerID="4283e036f0bfd880adf92b50ac2f32a4a2845dd240c425f041c8745290cf9cd6"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.083130 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" path="/var/lib/kubelet/pods/db22aed9-7413-4d06-8b61-fb6f730cf1cc/volumes"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.378547 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.378632 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.439437 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:46 crc kubenswrapper[4979]: I0130 22:47:46.300690 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:47 crc kubenswrapper[4979]: I0130 22:47:47.014883 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.267121 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kw66v" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server" containerID="cri-o://922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d" gracePeriod=2
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.671141 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.818243 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") "
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.818754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") "
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.818900 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") "
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.820092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities" (OuterVolumeSpecName: "utilities") pod "06f8e9b3-9b00-4fcb-ae98-1fac6314845e" (UID: "06f8e9b3-9b00-4fcb-ae98-1fac6314845e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.824245 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27" (OuterVolumeSpecName: "kube-api-access-bpf27") pod "06f8e9b3-9b00-4fcb-ae98-1fac6314845e" (UID: "06f8e9b3-9b00-4fcb-ae98-1fac6314845e"). InnerVolumeSpecName "kube-api-access-bpf27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.867510 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06f8e9b3-9b00-4fcb-ae98-1fac6314845e" (UID: "06f8e9b3-9b00-4fcb-ae98-1fac6314845e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.920775 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.920816 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.920828 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.275988 4979 generic.go:334] "Generic (PLEG): container finished" podID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d" exitCode=0
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.276127 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.276158 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"}
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.277113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"ab59619a27c710eb68b79d0a064ccdbed30ed0efc3ed64a23d934642a11a4801"}
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.277196 4979 scope.go:117] "RemoveContainer" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.302109 4979 scope.go:117] "RemoveContainer" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.304569 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.312250 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.319959 4979 scope.go:117] "RemoveContainer" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.361017 4979 scope.go:117] "RemoveContainer" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"
Jan 30 22:47:49 crc kubenswrapper[4979]: E0130 22:47:49.361660 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d\": container with ID starting with 922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d not found: ID does not exist" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.361712 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"} err="failed to get container status \"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d\": rpc error: code = NotFound desc = could not find container \"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d\": container with ID starting with 922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d not found: ID does not exist"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.361742 4979 scope.go:117] "RemoveContainer" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"
Jan 30 22:47:49 crc kubenswrapper[4979]: E0130 22:47:49.362256 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a\": container with ID starting with 31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a not found: ID does not exist" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.362308 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"} err="failed to get container status \"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a\": rpc error: code = NotFound desc = could not find container \"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a\": container with ID starting with 31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a not found: ID does not exist"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.362326 4979 scope.go:117] "RemoveContainer" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"
Jan 30 22:47:49 crc kubenswrapper[4979]: E0130 22:47:49.362624 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf\": container with ID starting with 26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf not found: ID does not exist" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.362649 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"} err="failed to get container status \"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf\": rpc error: code = NotFound desc = could not find container \"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf\": container with ID starting with 26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf not found: ID does not exist"
Jan 30 22:47:51 crc kubenswrapper[4979]: I0130 22:47:51.077640 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" path="/var/lib/kubelet/pods/06f8e9b3-9b00-4fcb-ae98-1fac6314845e/volumes"
Jan 30 22:48:02 crc kubenswrapper[4979]: I0130 22:48:02.039376 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:48:02 crc kubenswrapper[4979]: I0130 22:48:02.040208 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.035402 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"]
Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036481 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036505 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server"
Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036524 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-utilities"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036535 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-utilities"
Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036549 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-content"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036563 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-content"
Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036577 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-utilities"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036588 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-utilities"
Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036610 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-content"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036621 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-content"
Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036641 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036651 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036875 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036908 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.038487 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.045376 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"]
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.146646 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.146751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.146781 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.247683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.247732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.247822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.248309 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.248377 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.770446 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.960057 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:25 crc kubenswrapper[4979]: I0130 22:48:25.470831 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"]
Jan 30 22:48:25 crc kubenswrapper[4979]: I0130 22:48:25.518989 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerStarted","Data":"0d28c1c09d1376ad3a53c665c8252e3a7d5a04a540cf91d15d8d747c76858a84"}
Jan 30 22:48:26 crc kubenswrapper[4979]: I0130 22:48:26.528500 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea" exitCode=0
Jan 30 22:48:26 crc kubenswrapper[4979]: I0130 22:48:26.528545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"}
Jan 30 22:48:28 crc kubenswrapper[4979]: I0130 22:48:28.542947 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9" exitCode=0
Jan 30 22:48:28 crc kubenswrapper[4979]: I0130 22:48:28.543071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"}
Jan 30 22:48:29 crc kubenswrapper[4979]: I0130 22:48:29.551518 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerStarted","Data":"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"}
Jan 30 22:48:29 crc kubenswrapper[4979]: I0130 22:48:29.577121 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5h54" podStartSLOduration=2.813037983 podStartE2EDuration="5.577090269s" podCreationTimestamp="2026-01-30 22:48:24 +0000 UTC" firstStartedPulling="2026-01-30 22:48:26.530872751 +0000 UTC m=+4102.492119784" lastFinishedPulling="2026-01-30 22:48:29.294925037 +0000 UTC m=+4105.256172070" observedRunningTime="2026-01-30 22:48:29.572614468 +0000 UTC m=+4105.533861521" watchObservedRunningTime="2026-01-30 22:48:29.577090269 +0000 UTC m=+4105.538337312"
Jan 30 22:48:32 crc kubenswrapper[4979]: I0130 22:48:32.039310 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:48:32 crc kubenswrapper[4979]: I0130 22:48:32.039755 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:48:34 crc kubenswrapper[4979]: I0130 22:48:34.960338 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:34 crc kubenswrapper[4979]: I0130 22:48:34.960794 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:35 crc kubenswrapper[4979]: I0130 22:48:35.999937 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5h54" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:48:35 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:48:35 crc kubenswrapper[4979]: >
Jan 30 22:48:45 crc kubenswrapper[4979]: I0130 22:48:45.020636 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:45 crc kubenswrapper[4979]: I0130 22:48:45.094556 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:45 crc kubenswrapper[4979]: I0130 22:48:45.279054 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"]
Jan 30 22:48:46 crc kubenswrapper[4979]: I0130 22:48:46.696059 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5h54" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" containerID="cri-o://d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" gracePeriod=2
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.636350 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.704930 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" exitCode=0
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.704996 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"}
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.705309 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"0d28c1c09d1376ad3a53c665c8252e3a7d5a04a540cf91d15d8d747c76858a84"}
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.705334 4979 scope.go:117] "RemoveContainer" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.705020 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.721345 4979 scope.go:117] "RemoveContainer" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.739047 4979 scope.go:117] "RemoveContainer" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.760936 4979 scope.go:117] "RemoveContainer" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"
Jan 30 22:48:47 crc kubenswrapper[4979]: E0130 22:48:47.761339 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844\": container with ID starting with d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844 not found: ID does not exist" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761373 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"} err="failed to get container status \"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844\": rpc error: code = NotFound desc = could not find container \"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844\": container with ID starting with d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844 not found: ID does not exist"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761394 4979 scope.go:117] "RemoveContainer" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"
Jan 30 22:48:47 crc kubenswrapper[4979]: E0130 22:48:47.761607 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9\": container with ID starting with 51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9 not found: ID does not exist" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761628 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"} err="failed to get container status \"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9\": rpc error: code = NotFound desc = could not find container \"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9\": container with ID starting with 51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9 not found: ID does not exist"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761641 4979 scope.go:117] "RemoveContainer" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"
Jan 30 22:48:47 crc kubenswrapper[4979]: E0130 22:48:47.761812 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea\": container with ID starting with 4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea not found: ID does not exist" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761830 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"} err="failed to get container status \"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea\": rpc error: code = NotFound desc = could not find container \"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea\": container with ID starting with 4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea not found: ID does not exist"
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.814429 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") "
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.814567 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") "
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.814637 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") "
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.815868 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities" (OuterVolumeSpecName: "utilities") pod "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" (UID: "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.821145 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz" (OuterVolumeSpecName: "kube-api-access-rkngz") pod "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" (UID: "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f"). InnerVolumeSpecName "kube-api-access-rkngz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.916815 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") on node \"crc\" DevicePath \"\""
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.916861 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.949581 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" (UID: "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:48:48 crc kubenswrapper[4979]: I0130 22:48:48.017782 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:48:48 crc kubenswrapper[4979]: I0130 22:48:48.039480 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"]
Jan 30 22:48:48 crc kubenswrapper[4979]: I0130 22:48:48.046135 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"]
Jan 30 22:48:49 crc kubenswrapper[4979]: I0130 22:48:49.082582 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" path="/var/lib/kubelet/pods/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f/volumes"
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.040232 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.040703 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.040749 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.041508 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.041563 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e" gracePeriod=600
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.834972 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e" exitCode=0
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.835024 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"}
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.835647 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"}
Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.835677 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.542144 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"]
Jan 30 22:50:00 crc kubenswrapper[4979]: E0130 22:50:00.543347 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-utilities"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543366 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-utilities"
Jan 30 22:50:00 crc kubenswrapper[4979]: E0130 22:50:00.543389 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543409 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server"
Jan 30 22:50:00 crc kubenswrapper[4979]: E0130 22:50:00.543436 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-content"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543444 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-content"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543618 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.545013 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.552629 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"]
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.598603 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.599070 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.599233 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.701340 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.701490 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.701527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.702231 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.702311 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.721842 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.885684 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:01 crc kubenswrapper[4979]: I0130 22:50:01.141892 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"]
Jan 30 22:50:01 crc kubenswrapper[4979]: I0130 22:50:01.289187 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"}
Jan 30 22:50:01 crc kubenswrapper[4979]: I0130 22:50:01.289239 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"21f0dc9adbd59b37846726239ed1298deaed53c89051139335e0150ee34b243c"}
Jan 30 22:50:02 crc kubenswrapper[4979]: I0130 22:50:02.303436 4979 generic.go:334] "Generic (PLEG): container finished" podID="499781fa-40ab-4183-98f0-9ebb2907672d" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677" exitCode=0
Jan 30 22:50:02 crc kubenswrapper[4979]: I0130 22:50:02.303549 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"}
Jan 30 22:50:03 crc kubenswrapper[4979]: I0130 22:50:03.314249 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"}
Jan 30 22:50:04 crc kubenswrapper[4979]: I0130 22:50:04.330371 4979 generic.go:334] "Generic (PLEG): container finished" podID="499781fa-40ab-4183-98f0-9ebb2907672d" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2" exitCode=0
Jan 30 22:50:04 crc kubenswrapper[4979]: I0130 22:50:04.330502 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"}
Jan 30 22:50:06 crc kubenswrapper[4979]: I0130 22:50:06.350868 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"}
Jan 30 22:50:06 crc kubenswrapper[4979]: I0130 22:50:06.383669 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hszjp" podStartSLOduration=3.949433884 podStartE2EDuration="6.383650429s" podCreationTimestamp="2026-01-30 22:50:00 +0000 UTC" firstStartedPulling="2026-01-30 22:50:02.307715143 +0000 UTC m=+4198.268962196" lastFinishedPulling="2026-01-30 22:50:04.741931698 +0000 UTC m=+4200.703178741" observedRunningTime="2026-01-30 22:50:06.381622205 +0000 UTC m=+4202.342869248" watchObservedRunningTime="2026-01-30 22:50:06.383650429 +0000 UTC m=+4202.344897472"
Jan 30 22:50:10 crc kubenswrapper[4979]: I0130 22:50:10.885865 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:10 crc kubenswrapper[4979]: I0130 22:50:10.886492 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:10 crc kubenswrapper[4979]: I0130 22:50:10.947638 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:11 crc kubenswrapper[4979]: I0130 22:50:11.477179 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:11 crc kubenswrapper[4979]: I0130 22:50:11.566340 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"]
Jan 30 22:50:13 crc kubenswrapper[4979]: I0130 22:50:13.414626 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hszjp" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server" containerID="cri-o://a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" gracePeriod=2
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.054053 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.171116 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"499781fa-40ab-4183-98f0-9ebb2907672d\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") "
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.171258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"499781fa-40ab-4183-98f0-9ebb2907672d\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") "
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.171285 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"499781fa-40ab-4183-98f0-9ebb2907672d\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") "
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.172666 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities" (OuterVolumeSpecName: "utilities") pod "499781fa-40ab-4183-98f0-9ebb2907672d" (UID: "499781fa-40ab-4183-98f0-9ebb2907672d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.182307 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59" (OuterVolumeSpecName: "kube-api-access-njf59") pod "499781fa-40ab-4183-98f0-9ebb2907672d" (UID: "499781fa-40ab-4183-98f0-9ebb2907672d"). InnerVolumeSpecName "kube-api-access-njf59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.201321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "499781fa-40ab-4183-98f0-9ebb2907672d" (UID: "499781fa-40ab-4183-98f0-9ebb2907672d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.272950 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.273005 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") on node \"crc\" DevicePath \"\""
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.273022 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428806 4979 generic.go:334] "Generic (PLEG): container finished" podID="499781fa-40ab-4183-98f0-9ebb2907672d" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" exitCode=0
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428878 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"}
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428902 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428924 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"21f0dc9adbd59b37846726239ed1298deaed53c89051139335e0150ee34b243c"}
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428952 4979 scope.go:117] "RemoveContainer" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.465772 4979 scope.go:117] "RemoveContainer" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.465895 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"]
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.470808 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"]
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.491164 4979 scope.go:117] "RemoveContainer" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.517016 4979 scope.go:117] "RemoveContainer" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"
Jan 30 22:50:14 crc kubenswrapper[4979]: E0130 22:50:14.517629 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3\": container with ID starting with a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3 not found: ID does not exist" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.517901 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"} err="failed to get container status \"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3\": rpc error: code = NotFound desc = could not find container \"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3\": container with ID starting with a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3 not found: ID does not exist"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.518162 4979 scope.go:117] "RemoveContainer" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"
Jan 30 22:50:14 crc kubenswrapper[4979]: E0130 22:50:14.518847 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2\": container with ID starting with 4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2 not found: ID does not exist" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.518928 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"} err="failed to get container status \"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2\": rpc error: code = NotFound desc = could not find container \"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2\": container with ID starting with 4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2 not found: ID does not exist"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.518983 4979 scope.go:117] "RemoveContainer" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"
Jan 30 22:50:14 crc kubenswrapper[4979]: E0130 22:50:14.519475 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677\": container with ID starting with 6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677 not found: ID does not exist" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"
Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.519519 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"} err="failed to get container status \"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677\": rpc error: code = NotFound desc = could not find container \"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677\": container with ID starting with 6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677 not found: ID does not exist"
Jan 30 22:50:15 crc kubenswrapper[4979]: I0130 22:50:15.084878 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" path="/var/lib/kubelet/pods/499781fa-40ab-4183-98f0-9ebb2907672d/volumes"
Jan 30 22:51:02 crc kubenswrapper[4979]: I0130 22:51:02.039670 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:51:02 crc kubenswrapper[4979]: I0130 22:51:02.040451 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:51:32 crc kubenswrapper[4979]: I0130 22:51:32.039697 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:51:32 crc kubenswrapper[4979]: I0130 22:51:32.040656 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.039218 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.039967 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.040018 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.040725 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.040800 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" gracePeriod=600
Jan 30 22:52:02 crc kubenswrapper[4979]: E0130 22:52:02.159155 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.265699 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" exitCode=0
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.265740 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"}
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.265784 4979 scope.go:117] "RemoveContainer" containerID="ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.266603 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:02 crc kubenswrapper[4979]: E0130 22:52:02.267128 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:15 crc kubenswrapper[4979]: I0130 22:52:15.079522 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:15 crc kubenswrapper[4979]: E0130 22:52:15.081953 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:29 crc kubenswrapper[4979]: I0130 22:52:29.069489 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:29 crc kubenswrapper[4979]: E0130 22:52:29.070257 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:40 crc kubenswrapper[4979]: I0130 22:52:40.070171 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:40 crc kubenswrapper[4979]: E0130 22:52:40.070939 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:51 crc kubenswrapper[4979]: I0130 22:52:51.070405 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:51 crc kubenswrapper[4979]: E0130 22:52:51.071568 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:03 crc kubenswrapper[4979]: I0130 22:53:03.070197 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:03 crc kubenswrapper[4979]: E0130 22:53:03.071593 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:18 crc kubenswrapper[4979]: I0130 22:53:18.070513 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:18 crc kubenswrapper[4979]: E0130 22:53:18.071313 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:32 crc kubenswrapper[4979]: I0130 22:53:32.069742 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:32 crc kubenswrapper[4979]: E0130 22:53:32.070823 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:45 crc kubenswrapper[4979]: I0130 22:53:45.079601 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:45 crc kubenswrapper[4979]: E0130 22:53:45.080570 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:58 crc kubenswrapper[4979]: I0130 22:53:58.070121 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:58 crc kubenswrapper[4979]: E0130 22:53:58.070866 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:54:13 crc kubenswrapper[4979]: I0130 22:54:13.069803 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:54:13 crc kubenswrapper[4979]: E0130 22:54:13.071016 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:54:27 crc kubenswrapper[4979]: I0130 22:54:27.069914 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:54:27 crc kubenswrapper[4979]: E0130 22:54:27.070752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\""
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:54:38 crc kubenswrapper[4979]: I0130 22:54:38.070529 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:54:38 crc kubenswrapper[4979]: E0130 22:54:38.071365 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:54:52 crc kubenswrapper[4979]: I0130 22:54:52.069737 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:54:52 crc kubenswrapper[4979]: E0130 22:54:52.070829 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:55:06 crc kubenswrapper[4979]: I0130 22:55:06.069557 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:55:06 crc kubenswrapper[4979]: E0130 22:55:06.070690 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:55:21 crc kubenswrapper[4979]: I0130 22:55:21.070126 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:55:21 crc kubenswrapper[4979]: E0130 22:55:21.070977 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:55:33 crc kubenswrapper[4979]: I0130 22:55:33.070426 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:55:33 crc kubenswrapper[4979]: E0130 22:55:33.071775 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:55:47 crc kubenswrapper[4979]: I0130 22:55:47.070059 4979 
scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:55:47 crc kubenswrapper[4979]: E0130 22:55:47.070827 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:56:00 crc kubenswrapper[4979]: I0130 22:56:00.069908 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:56:00 crc kubenswrapper[4979]: E0130 22:56:00.070563 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:56:14 crc kubenswrapper[4979]: I0130 22:56:14.070151 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:56:14 crc kubenswrapper[4979]: E0130 22:56:14.070808 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:56:28 crc kubenswrapper[4979]: I0130 22:56:28.069717 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:56:28 crc kubenswrapper[4979]: E0130 22:56:28.070728 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.231465 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.236999 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.354811 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-bws8q"] Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-utilities" Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355498 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-utilities" Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.231465 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"]
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.236999 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"]
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.354811 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-utilities"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355498 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-utilities"
Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355569 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355628 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server"
Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355704 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-content"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355759 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-content"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355928 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.356463 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.358435 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.358439 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.366483 4979 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-jpprx"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.366483 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.372638 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.377849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.377898 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.377938 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478438 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478466 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.479118 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.504049 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.682739 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:32 crc kubenswrapper[4979]: I0130 22:56:32.095272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:32 crc kubenswrapper[4979]: I0130 22:56:32.103390 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 22:56:32 crc kubenswrapper[4979]: I0130 22:56:32.411650 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bws8q" event={"ID":"5cfa1ab3-8375-406f-8337-8bf16b0eca15","Type":"ContainerStarted","Data":"e6433d25883518b82c9d988c509f16f512f8e37c7dee620c5b63b7ddcb930dc9"}
Jan 30 22:56:33 crc kubenswrapper[4979]: I0130 22:56:33.080047 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b164f6-7e71-4403-9598-6673cea6876e" path="/var/lib/kubelet/pods/55b164f6-7e71-4403-9598-6673cea6876e/volumes"
Jan 30 22:56:33 crc kubenswrapper[4979]: I0130 22:56:33.420467 4979 generic.go:334] "Generic (PLEG): container finished" podID="5cfa1ab3-8375-406f-8337-8bf16b0eca15" containerID="f9b321201755262611e536dca11c7193aa5f320fa99f7da74aac970a57d934ef" exitCode=0
Jan 30 22:56:33 crc kubenswrapper[4979]: I0130 22:56:33.420563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bws8q" event={"ID":"5cfa1ab3-8375-406f-8337-8bf16b0eca15","Type":"ContainerDied","Data":"f9b321201755262611e536dca11c7193aa5f320fa99f7da74aac970a57d934ef"}
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.778440 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835282 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") "
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835429 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "5cfa1ab3-8375-406f-8337-8bf16b0eca15" (UID: "5cfa1ab3-8375-406f-8337-8bf16b0eca15"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835466 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") "
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835628 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") "
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.836019 4979 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.840886 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk" (OuterVolumeSpecName: "kube-api-access-q7htk") pod "5cfa1ab3-8375-406f-8337-8bf16b0eca15" (UID: "5cfa1ab3-8375-406f-8337-8bf16b0eca15"). InnerVolumeSpecName "kube-api-access-q7htk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.853018 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "5cfa1ab3-8375-406f-8337-8bf16b0eca15" (UID: "5cfa1ab3-8375-406f-8337-8bf16b0eca15"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.937462 4979 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.937499 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:35 crc kubenswrapper[4979]: I0130 22:56:35.444778 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bws8q" event={"ID":"5cfa1ab3-8375-406f-8337-8bf16b0eca15","Type":"ContainerDied","Data":"e6433d25883518b82c9d988c509f16f512f8e37c7dee620c5b63b7ddcb930dc9"}
Jan 30 22:56:35 crc kubenswrapper[4979]: I0130 22:56:35.445299 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6433d25883518b82c9d988c509f16f512f8e37c7dee620c5b63b7ddcb930dc9"
Jan 30 22:56:35 crc kubenswrapper[4979]: I0130 22:56:35.444850 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.851254 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.858063 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
path="/var/lib/kubelet/pods/5cfa1ab3-8375-406f-8337-8bf16b0eca15/volumes" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169536 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169860 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.170435 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.195382 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.292491 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.754778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q6qv6"] Jan 30 22:56:38 crc kubenswrapper[4979]: I0130 22:56:38.468495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q6qv6" event={"ID":"51e286a1-1a78-4074-83f5-967245b1c36a","Type":"ContainerStarted","Data":"fa5e251e45b390ee77b5b0149a7bf2c508aa1cbb4edd80741e9f9aecdfa56901"} Jan 30 22:56:39 crc kubenswrapper[4979]: I0130 22:56:39.479363 4979 generic.go:334] "Generic (PLEG): container finished" podID="51e286a1-1a78-4074-83f5-967245b1c36a" containerID="b532d569095e0ff5c9224950f17c01109c557dea11e198d44eead3dbf56c7594" exitCode=0 Jan 30 22:56:39 crc kubenswrapper[4979]: I0130 22:56:39.479464 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q6qv6" event={"ID":"51e286a1-1a78-4074-83f5-967245b1c36a","Type":"ContainerDied","Data":"b532d569095e0ff5c9224950f17c01109c557dea11e198d44eead3dbf56c7594"} Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.844852 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.935569 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"51e286a1-1a78-4074-83f5-967245b1c36a\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.935686 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"51e286a1-1a78-4074-83f5-967245b1c36a\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.935773 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"51e286a1-1a78-4074-83f5-967245b1c36a\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.936688 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "51e286a1-1a78-4074-83f5-967245b1c36a" (UID: "51e286a1-1a78-4074-83f5-967245b1c36a"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.946558 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg" (OuterVolumeSpecName: "kube-api-access-knhbg") pod "51e286a1-1a78-4074-83f5-967245b1c36a" (UID: "51e286a1-1a78-4074-83f5-967245b1c36a"). InnerVolumeSpecName "kube-api-access-knhbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.971305 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "51e286a1-1a78-4074-83f5-967245b1c36a" (UID: "51e286a1-1a78-4074-83f5-967245b1c36a"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.037742 4979 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.037785 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.037795 4979 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.069806 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:56:41 crc kubenswrapper[4979]: E0130 22:56:41.070351 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.501208 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q6qv6" event={"ID":"51e286a1-1a78-4074-83f5-967245b1c36a","Type":"ContainerDied","Data":"fa5e251e45b390ee77b5b0149a7bf2c508aa1cbb4edd80741e9f9aecdfa56901"} Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.501254 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5e251e45b390ee77b5b0149a7bf2c508aa1cbb4edd80741e9f9aecdfa56901" Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.501339 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6" Jan 30 22:56:54 crc kubenswrapper[4979]: I0130 22:56:54.069658 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:56:54 crc kubenswrapper[4979]: E0130 22:56:54.070549 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:57:07 crc kubenswrapper[4979]: I0130 22:57:07.069857 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 22:57:07 crc kubenswrapper[4979]: I0130 22:57:07.686464 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"} Jan 30 22:57:25 crc kubenswrapper[4979]: I0130 22:57:25.785708 4979 scope.go:117] "RemoveContainer" containerID="f69e5e60ca65ac037198a7875cb73ae5dd60bb9ab12c82aead51159afd7e44ab" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.277127 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"] Jan 30 22:57:32 crc kubenswrapper[4979]: E0130 22:57:32.278145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e286a1-1a78-4074-83f5-967245b1c36a" containerName="storage" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.278158 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e286a1-1a78-4074-83f5-967245b1c36a" containerName="storage" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.278328 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e286a1-1a78-4074-83f5-967245b1c36a" containerName="storage" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.289260 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.331636 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"] Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.422380 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.422907 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.422991 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524502 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524931 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.525051 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.547169 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.632640 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.126292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"] Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.897962 4979 generic.go:334] "Generic (PLEG): container finished" podID="623675c5-9919-4674-b268-95d143a04fee" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744" exitCode=0 Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.898059 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"} Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.898339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerStarted","Data":"c17c52095893b902e8ea8de1b64bac329fbf99d9059d027246fa472611bb55dc"} Jan 30 22:57:35 crc kubenswrapper[4979]: I0130 22:57:35.931921 4979 generic.go:334] "Generic (PLEG): container finished" podID="623675c5-9919-4674-b268-95d143a04fee" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c" exitCode=0 Jan 30 22:57:35 crc kubenswrapper[4979]: I0130 22:57:35.932119 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"} Jan 30 22:57:36 crc kubenswrapper[4979]: I0130 22:57:36.940468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerStarted","Data":"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"} Jan 30 22:57:36 crc kubenswrapper[4979]: I0130 22:57:36.962147 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5ffgp" podStartSLOduration=2.501053168 podStartE2EDuration="4.96212849s" podCreationTimestamp="2026-01-30 22:57:32 +0000 UTC" firstStartedPulling="2026-01-30 22:57:33.900181568 +0000 UTC m=+4649.861428631" lastFinishedPulling="2026-01-30 22:57:36.36125692 +0000 UTC m=+4652.322503953" observedRunningTime="2026-01-30 22:57:36.958400319 +0000 UTC m=+4652.919647372" watchObservedRunningTime="2026-01-30 22:57:36.96212849 +0000 UTC m=+4652.923375523" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.361622 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.365497 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.380454 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.474682 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.474826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.475146 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.576822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.576889 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.576954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.577714 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.577772 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.603422 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.694693 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.168215 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.633666 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.633999 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.674440 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.994974 4979 generic.go:334] "Generic (PLEG): container finished" podID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995" exitCode=0 Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.995018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995"} Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.995090 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerStarted","Data":"637798bf8cb0e9717c3ac1817083cac1bf20c9222da9c74a0b8b70e0c5201c1c"} Jan 30 22:57:43 crc kubenswrapper[4979]: I0130 22:57:43.040609 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:44 crc kubenswrapper[4979]: I0130 22:57:44.004176 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerStarted","Data":"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67"} Jan 30 22:57:44 crc kubenswrapper[4979]: I0130 22:57:44.937307 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"] Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.015789 4979 generic.go:334] "Generic (PLEG): container finished" podID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" exitCode=0 Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.015856 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67"} Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.016070 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-5ffgp" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server" containerID="cri-o://7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" gracePeriod=2 Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.726343 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.838698 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"623675c5-9919-4674-b268-95d143a04fee\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.838762 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"623675c5-9919-4674-b268-95d143a04fee\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.838820 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"623675c5-9919-4674-b268-95d143a04fee\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.839631 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities" (OuterVolumeSpecName: "utilities") pod "623675c5-9919-4674-b268-95d143a04fee" (UID: "623675c5-9919-4674-b268-95d143a04fee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.844751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw" (OuterVolumeSpecName: "kube-api-access-4ccnw") pod "623675c5-9919-4674-b268-95d143a04fee" (UID: "623675c5-9919-4674-b268-95d143a04fee"). InnerVolumeSpecName "kube-api-access-4ccnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.941245 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.941271 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.023017 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerStarted","Data":"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f"} Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024644 4979 generic.go:334] "Generic (PLEG): container finished" podID="623675c5-9919-4674-b268-95d143a04fee" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" exitCode=0 Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024692 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"} Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"c17c52095893b902e8ea8de1b64bac329fbf99d9059d027246fa472611bb55dc"} Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024743 4979 scope.go:117] "RemoveContainer" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024782 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.040980 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xvvr4" podStartSLOduration=2.331165105 podStartE2EDuration="5.040967064s" podCreationTimestamp="2026-01-30 22:57:41 +0000 UTC" firstStartedPulling="2026-01-30 22:57:42.997138272 +0000 UTC m=+4658.958385305" lastFinishedPulling="2026-01-30 22:57:45.706940221 +0000 UTC m=+4661.668187264" observedRunningTime="2026-01-30 22:57:46.038646671 +0000 UTC m=+4661.999893704" watchObservedRunningTime="2026-01-30 22:57:46.040967064 +0000 UTC m=+4662.002214097" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.059284 4979 scope.go:117] "RemoveContainer" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.076124 4979 scope.go:117] "RemoveContainer" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.088575 4979 scope.go:117] "RemoveContainer" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" Jan 30 22:57:46 crc kubenswrapper[4979]: E0130 22:57:46.088940 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5\": container with ID starting with 7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5 not found: ID does not exist" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.088995 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"} err="failed to get container status \"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5\": rpc error: code = NotFound desc = could not find container \"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5\": container with ID starting with 7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5 not found: ID does not exist" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089049 4979 scope.go:117] "RemoveContainer" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c" Jan 30 22:57:46 crc kubenswrapper[4979]: E0130 22:57:46.089334 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c\": container with ID starting with 91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c not found: ID does not exist" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c" Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089370 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"} err="failed to get container status \"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c\": rpc error: code = NotFound desc = could not find container \"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c\": container with ID starting with 91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c not found: ID does not exist" Jan 30 
22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089390 4979 scope.go:117] "RemoveContainer" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"
Jan 30 22:57:46 crc kubenswrapper[4979]: E0130 22:57:46.089618 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744\": container with ID starting with 2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744 not found: ID does not exist" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089647 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"} err="failed to get container status \"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744\": rpc error: code = NotFound desc = could not find container \"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744\": container with ID starting with 2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744 not found: ID does not exist"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.219425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "623675c5-9919-4674-b268-95d143a04fee" (UID: "623675c5-9919-4674-b268-95d143a04fee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.247350 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.354818 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.360227 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:47 crc kubenswrapper[4979]: I0130 22:57:47.078642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623675c5-9919-4674-b268-95d143a04fee" path="/var/lib/kubelet/pods/623675c5-9919-4674-b268-95d143a04fee/volumes"
Jan 30 22:57:51 crc kubenswrapper[4979]: I0130 22:57:51.695606 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:51 crc kubenswrapper[4979]: I0130 22:57:51.695948 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:51 crc kubenswrapper[4979]: I0130 22:57:51.759915 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:52 crc kubenswrapper[4979]: I0130 22:57:52.131784 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:52 crc kubenswrapper[4979]: I0130 22:57:52.181903 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"]
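[Editor's note, not part of the log: the entries above complete a full catalog-pod teardown for certified-operators-5ffgp: the registry-server container is killed with a 2s grace period, the PLEG reports ContainerDied, the reconciler unmounts and detaches the pod's volumes, the API-driven SyncLoop DELETE/REMOVE retire the pod, and the orphaned volume directory is cleaned up a second later. The repeated "RemoveContainer" followed by NotFound errors are the kubelet re-querying containers CRI-O has already deleted, which is benign here. The sketch below pulls that timeline out of a log like this one; the file name kubelet.log, and the choice of marker strings, are assumptions based on the lines above, not a tool shipped with kubelet.]

```go
// teardown_trace.go: a minimal sketch that prints the teardown milestones for one
// pod from a kubelet log in the format seen above. Assumes the log is ./kubelet.log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Pod name and UID copied from the entries above (hypothetical targets for the sketch).
	pod := "openshift-marketplace/certified-operators-5ffgp"
	uid := "623675c5-9919-4674-b268-95d143a04fee"
	markers := []string{
		"Killing container with a grace period", // kuberuntime_container.go
		"ContainerDied",                         // PLEG event
		"UnmountVolume.TearDown succeeded",      // operation_generator.go
		"Volume detached for volume",            // reconciler_common.go
		"SyncLoop DELETE",
		"SyncLoop REMOVE",
		"Cleaned up orphaned pod volumes dir",
	}
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // wrapped entries can run long
	for sc.Scan() {
		line := sc.Text()
		// Volume entries carry only the UID (inside the volume path), so match either.
		if !strings.Contains(line, pod) && !strings.Contains(line, uid) {
			continue
		}
		for _, m := range markers {
			if strings.Contains(line, m) {
				// The syslog prefix "Jan 30 22:57:46" is the first three fields;
				// continuation fragments without a prefix will print whatever is there.
				fmt.Println(m, "at", strings.Join(strings.Fields(line)[:3], " "))
				break
			}
		}
	}
}
```
[Run against this file, it should print one line per milestone with its syslog timestamp.]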
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.094799 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xvvr4" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server" containerID="cri-o://7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" gracePeriod=2
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.526392 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.582622 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") "
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.582727 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") "
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.582811 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") "
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.584426 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities" (OuterVolumeSpecName: "utilities") pod "97674aa1-34d3-4bb3-a4f5-31af8b1138c4" (UID: "97674aa1-34d3-4bb3-a4f5-31af8b1138c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.589946 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql" (OuterVolumeSpecName: "kube-api-access-rkdql") pod "97674aa1-34d3-4bb3-a4f5-31af8b1138c4" (UID: "97674aa1-34d3-4bb3-a4f5-31af8b1138c4"). InnerVolumeSpecName "kube-api-access-rkdql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.634416 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97674aa1-34d3-4bb3-a4f5-31af8b1138c4" (UID: "97674aa1-34d3-4bb3-a4f5-31af8b1138c4"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.684465 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.684499 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.684509 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102748 4979 generic.go:334] "Generic (PLEG): container finished" podID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" exitCode=0 Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f"} Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102843 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"637798bf8cb0e9717c3ac1817083cac1bf20c9222da9c74a0b8b70e0c5201c1c"} Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102846 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102862 4979 scope.go:117] "RemoveContainer" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.124990 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.131292 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.136214 4979 scope.go:117] "RemoveContainer" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.167011 4979 scope.go:117] "RemoveContainer" containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.202098 4979 scope.go:117] "RemoveContainer" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" Jan 30 22:57:55 crc kubenswrapper[4979]: E0130 22:57:55.202745 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f\": container with ID starting with 7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f not found: ID does not exist" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.202823 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f"} err="failed to get container status \"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f\": rpc error: code = NotFound desc = could not find container \"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f\": container with ID starting with 7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f not found: ID does not exist" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.202857 4979 scope.go:117] "RemoveContainer" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" Jan 30 22:57:55 crc kubenswrapper[4979]: E0130 22:57:55.203355 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67\": container with ID starting with 6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67 not found: ID does not exist" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.203410 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67"} err="failed to get container status \"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67\": rpc error: code = NotFound desc = could not find container \"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67\": container with ID starting with 6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67 not found: ID does not exist" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.203438 4979 scope.go:117] "RemoveContainer" 
containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995"
Jan 30 22:57:55 crc kubenswrapper[4979]: E0130 22:57:55.203920 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995\": container with ID starting with 992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995 not found: ID does not exist" containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995"
Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.203987 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995"} err="failed to get container status \"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995\": rpc error: code = NotFound desc = could not find container \"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995\": container with ID starting with 992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995 not found: ID does not exist"
Jan 30 22:57:57 crc kubenswrapper[4979]: I0130 22:57:57.084894 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" path="/var/lib/kubelet/pods/97674aa1-34d3-4bb3-a4f5-31af8b1138c4/volumes"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.488018 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"]
Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.492519 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.492738 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server"
Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.492844 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-content"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.492980 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-content"
Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493121 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-content"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493210 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-content"
Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493300 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493387 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server"
Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493484 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-utilities"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493794 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-utilities"
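[Editor's note, not part of the log: the cpu_manager/state_mem pairs above, continuing below, fire when the redhat-operators-tsvx6 ADD is admitted: the CPU and memory managers purge per-container accounting left behind by the two pods deleted earlier. They are logged at error level (E0130) but describe routine cleanup of stale state, not a failure. A sketch that tallies these removals per podUID, assuming the log sits at ./kubelet.log:]

```go
// stale_state_count.go: a sketch counting "RemoveStaleState: removing container"
// entries per podUID in a kubelet log like this one (path ./kubelet.log is assumed).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the cpu_manager.go:410 message format seen in the entries above.
	re := regexp.MustCompile(`RemoveStaleState: removing container" podUID="([^"]+)" containerName="([^"]+)"`)
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // wrapped entries can run long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++ // m[2] would be the container name if a breakdown is wanted
		}
	}
	for uid, n := range counts {
		fmt.Printf("%s: %d stale container(s)\n", uid, n)
	}
}
```
[Against this excerpt it should report three stale containers apiece for 623675c5-… and 97674aa1-…, and later three more for c6e711e0-… when the dnsmasq pod is admitted at 22:59:59.]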
Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493901 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-utilities"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493993 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-utilities"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.494315 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.494453 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.495707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.503718 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"]
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.525151 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.525325 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.525356 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.627886 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.627983 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.628128 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.628431 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.629021 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.647209 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.815422 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.265705 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"]
Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.504096 4979 generic.go:334] "Generic (PLEG): container finished" podID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerID="5d7c479a9e141b7e7a00eb0439d0f66d01bd5fba7f1b04c726e4be2b19adc583" exitCode=0
Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.504175 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"5d7c479a9e141b7e7a00eb0439d0f66d01bd5fba7f1b04c726e4be2b19adc583"}
Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.504221 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerStarted","Data":"5b94e80ea9248f79b7959c6c9c8e88281a22d40693a65524aab21567090ee50c"}
Jan 30 22:58:44 crc kubenswrapper[4979]: I0130 22:58:44.519362 4979 generic.go:334] "Generic (PLEG): container finished" podID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerID="c083d9958969dba5413db9bda4338a29832e0b8f64a3b09ee91958c62054a311" exitCode=0
Jan 30 22:58:44 crc kubenswrapper[4979]: I0130 22:58:44.519597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"c083d9958969dba5413db9bda4338a29832e0b8f64a3b09ee91958c62054a311"}
Jan 30 22:58:45 crc kubenswrapper[4979]: I0130 22:58:45.530219 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerStarted","Data":"9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35"}
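[Editor's note, not part of the log: the pod_startup_latency_tracker entry that follows reports two durations for redhat-operators-tsvx6. The numbers are self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E duration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling, measured on the monotonic m=+ clock). This relationship is inferred from the numbers in this log, not quoted from kubelet source. A minimal check, with the constants copied from the entry below:]

```go
// slo_check.go: a sketch verifying the startup-duration arithmetic in the
// "Observed pod startup duration" entry that follows this note.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 4718.467154873 // monotonic m=+ offset from the log entry
		lastFinishedPulling = 4720.880494810 // monotonic m=+ offset from the log entry
		e2e                 = 4.549629719    // podStartE2EDuration, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull time: %.9fs\n", pull)  // 2.413339937s
	fmt.Printf("SLO duration:    %.9fs\n", e2e-pull) // 2.136289782s, matches podStartSLOduration
}
```
[The same arithmetic reproduces the community-operators-xvvr4 entry earlier: 5.040967064s - 2.709801959s = 2.331165105s.]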
Jan 30 22:58:45 crc kubenswrapper[4979]: I0130 22:58:45.549647 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsvx6" podStartSLOduration=2.136289782 podStartE2EDuration="4.549629719s" podCreationTimestamp="2026-01-30 22:58:41 +0000 UTC" firstStartedPulling="2026-01-30 22:58:42.50590784 +0000 UTC m=+4718.467154873" lastFinishedPulling="2026-01-30 22:58:44.919247777 +0000 UTC m=+4720.880494810" observedRunningTime="2026-01-30 22:58:45.547764538 +0000 UTC m=+4721.509011571" watchObservedRunningTime="2026-01-30 22:58:45.549629719 +0000 UTC m=+4721.510876752"
Jan 30 22:58:51 crc kubenswrapper[4979]: I0130 22:58:51.816467 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:51 crc kubenswrapper[4979]: I0130 22:58:51.817517 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:51 crc kubenswrapper[4979]: I0130 22:58:51.895898 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:52 crc kubenswrapper[4979]: I0130 22:58:52.618831 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsvx6"
Jan 30 22:58:52 crc kubenswrapper[4979]: I0130 22:58:52.672715 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"]
Jan 30 22:58:54 crc kubenswrapper[4979]: I0130 22:58:54.927633 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsvx6" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" containerID="cri-o://9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35" gracePeriod=2
Jan 30 22:58:55 crc kubenswrapper[4979]: I0130 22:58:55.935840 4979 generic.go:334] "Generic (PLEG): container finished" podID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerID="9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35" exitCode=0
Jan 30 22:58:55 crc kubenswrapper[4979]: I0130 22:58:55.936072 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35"}
Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.394804 4979 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.543064 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"c6e711e0-7edf-438f-b03e-5e8f786c3737\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.543153 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"c6e711e0-7edf-438f-b03e-5e8f786c3737\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.543247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"c6e711e0-7edf-438f-b03e-5e8f786c3737\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.544172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities" (OuterVolumeSpecName: "utilities") pod "c6e711e0-7edf-438f-b03e-5e8f786c3737" (UID: "c6e711e0-7edf-438f-b03e-5e8f786c3737"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.548738 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9" (OuterVolumeSpecName: "kube-api-access-tfhp9") pod "c6e711e0-7edf-438f-b03e-5e8f786c3737" (UID: "c6e711e0-7edf-438f-b03e-5e8f786c3737"). InnerVolumeSpecName "kube-api-access-tfhp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.644894 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.644995 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") on node \"crc\" DevicePath \"\"" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.703452 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6e711e0-7edf-438f-b03e-5e8f786c3737" (UID: "c6e711e0-7edf-438f-b03e-5e8f786c3737"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.746721 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.946940 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"5b94e80ea9248f79b7959c6c9c8e88281a22d40693a65524aab21567090ee50c"} Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.946992 4979 scope.go:117] "RemoveContainer" containerID="9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.947208 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.973332 4979 scope.go:117] "RemoveContainer" containerID="c083d9958969dba5413db9bda4338a29832e0b8f64a3b09ee91958c62054a311" Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.006976 4979 scope.go:117] "RemoveContainer" containerID="5d7c479a9e141b7e7a00eb0439d0f66d01bd5fba7f1b04c726e4be2b19adc583" Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.024043 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.033468 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.086156 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" path="/var/lib/kubelet/pods/c6e711e0-7edf-438f-b03e-5e8f786c3737/volumes" Jan 30 22:59:32 crc kubenswrapper[4979]: I0130 22:59:32.039977 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:59:32 crc kubenswrapper[4979]: I0130 22:59:32.040619 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.700079 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 22:59:59 crc kubenswrapper[4979]: E0130 22:59:59.701173 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-content" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701191 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-content" Jan 30 22:59:59 crc kubenswrapper[4979]: E0130 22:59:59.701228 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701235 4979 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" Jan 30 22:59:59 crc kubenswrapper[4979]: E0130 22:59:59.701250 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-utilities" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701258 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-utilities" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701419 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.702336 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.705590 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.709318 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.710242 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.710322 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.710827 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-spztd" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.723583 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.812375 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.812434 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.812524 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.913142 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 
22:59:59.913230 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.913253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.914170 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.914219 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.944730 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.985008 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.986193 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.019377 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.046100 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.118316 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.118673 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.118695 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.142083 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.142930 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.149672 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.149915 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.154265 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: 
\"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221560 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221593 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221630 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.222528 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.223466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.243561 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.303073 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.323265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.323310 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.323365 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.324552 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.327225 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.339576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.413094 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:00 crc kubenswrapper[4979]: W0130 23:00:00.448908 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd04dc18f_4a9e_40c5_89af_d1a090d55f19.slice/crio-3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740 WatchSource:0}: Error finding container 3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740: Status 404 returned error can't find the container with id 3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740 Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.465307 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" 
event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerStarted","Data":"3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740"} Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.467530 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.768935 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.863760 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.866379 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.871944 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.872135 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.872247 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.872361 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-25ft5" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.873366 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.877492 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.955695 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z"] Jan 30 23:00:00 crc kubenswrapper[4979]: W0130 23:00:00.964096 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d53401_2853_4ace_84c5_621db486afe4.slice/crio-8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3 WatchSource:0}: Error finding container 8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3: Status 404 returned error can't find the container with id 8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038650 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038683 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038707 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038747 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039013 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039116 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140525 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140620 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bwc\" (UniqueName: 
\"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140707 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140742 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140785 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.141474 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.142300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.142603 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 
23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.143003 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.147340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.147461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.148552 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.148621 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d01eb65d8adc4d32a35137e4c958b2a45829d9b744b41c2b35ba94851c4723/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.152660 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.160899 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.189069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.205652 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.216721 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.236268 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240398 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240652 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240735 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240850 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.241211 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vcf7n" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343582 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343867 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343885 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc 
kubenswrapper[4979]: I0130 23:00:01.343973 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.344013 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.344048 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445749 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445805 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445829 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445887 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: 
I0130 23:00:01.445907 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.446512 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.446658 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.447883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.448501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451162 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451771 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451804 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6649be050b7f075ba9ae655c5497b53ee628ceded131093e643c8c774a634b05/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451971 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.463779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.478814 4979 generic.go:334] "Generic (PLEG): container finished" podID="b9d53401-2853-4ace-84c5-621db486afe4" containerID="21e6285a2d48c55e292d8fabf4f8ed164cdad4a9a3d4934a322f2d44ce65e551" exitCode=0 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.478910 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" event={"ID":"b9d53401-2853-4ace-84c5-621db486afe4","Type":"ContainerDied","Data":"21e6285a2d48c55e292d8fabf4f8ed164cdad4a9a3d4934a322f2d44ce65e551"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.478938 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" event={"ID":"b9d53401-2853-4ace-84c5-621db486afe4","Type":"ContainerStarted","Data":"8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.480642 4979 generic.go:334] "Generic (PLEG): container finished" podID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerID="e92999cfafdeac8211d5158e0746bbde23f4c02a545a5abc5507e1fcf7782d7c" exitCode=0 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.480751 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerDied","Data":"e92999cfafdeac8211d5158e0746bbde23f4c02a545a5abc5507e1fcf7782d7c"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.480800 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerStarted","Data":"994ef8ca363dd40c266610987d3ec533707b724f9ddcc04659cbb378e0bcd6ba"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.481140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.483191 4979 generic.go:334] "Generic (PLEG): container finished" podID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" exitCode=0 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.483217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerDied","Data":"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.550794 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.564850 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.040145 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.040486 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.060980 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: W0130 23:00:02.066985 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdc62fc_7d4d_4f2a_9611_4011f302320a.slice/crio-77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc WatchSource:0}: Error finding container 77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc: Status 404 returned error can't find the container with id 77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.385304 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.387530 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.390409 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q74gm" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.391348 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.391495 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.391696 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.404681 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.404969 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.494346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerStarted","Data":"41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.494689 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.497266 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerStarted","Data":"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.497527 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.498974 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerStarted","Data":"a401bb2823a23f538ee4aeaa4f20fe114ad58881d1e7c333038a2c3b643757b1"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.500764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerStarted","Data":"77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.554112 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" podStartSLOduration=3.554084926 podStartE2EDuration="3.554084926s" podCreationTimestamp="2026-01-30 22:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:02.53936805 +0000 UTC m=+4798.500615083" watchObservedRunningTime="2026-01-30 23:00:02.554084926 +0000 UTC m=+4798.515331969" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.564941 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.565543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-kolla-config\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.566501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.566856 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.567065 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.583367 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpkcj\" (UniqueName: \"kubernetes.io/projected/20a89776-fed1-4db4-80e6-11cfdb8f810b-kube-api-access-vpkcj\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.583521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.583584 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-default\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.585978 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" podStartSLOduration=3.585953053 podStartE2EDuration="3.585953053s" podCreationTimestamp="2026-01-30 22:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:02.585572823 +0000 UTC m=+4798.546819866" watchObservedRunningTime="2026-01-30 23:00:02.585953053 +0000 UTC m=+4798.547200096" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685797 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685876 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-default\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685920 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685961 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-kolla-config\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686002 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686055 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686128 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpkcj\" (UniqueName: \"kubernetes.io/projected/20a89776-fed1-4db4-80e6-11cfdb8f810b-kube-api-access-vpkcj\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686980 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-kolla-config\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.687666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.688358 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-default\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.689233 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.707841 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.708996 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.709153 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78080cf084e20dcd3b8a9006ba9106db7dae3f598d9b707e2876adfc2da03006/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.715015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.717258 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpkcj\" (UniqueName: \"kubernetes.io/projected/20a89776-fed1-4db4-80e6-11cfdb8f810b-kube-api-access-vpkcj\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.822392 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.828949 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.835758 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tj5p2" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.835956 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.854317 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.002851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfbtn\" (UniqueName: \"kubernetes.io/projected/4a63b89d-496c-4f6e-8ba3-a18de60230af-kube-api-access-hfbtn\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.002930 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-kolla-config\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.003000 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-config-data\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.017798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.049578 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.106361 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfbtn\" (UniqueName: \"kubernetes.io/projected/4a63b89d-496c-4f6e-8ba3-a18de60230af-kube-api-access-hfbtn\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.106441 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-kolla-config\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.106508 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-config-data\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.108248 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-config-data\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.108614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-kolla-config\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.130102 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfbtn\" (UniqueName: \"kubernetes.io/projected/4a63b89d-496c-4f6e-8ba3-a18de60230af-kube-api-access-hfbtn\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.155224 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.208122 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"b9d53401-2853-4ace-84c5-621db486afe4\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.208391 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"b9d53401-2853-4ace-84c5-621db486afe4\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.208433 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"b9d53401-2853-4ace-84c5-621db486afe4\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.210390 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9d53401-2853-4ace-84c5-621db486afe4" (UID: "b9d53401-2853-4ace-84c5-621db486afe4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.215184 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b9d53401-2853-4ace-84c5-621db486afe4" (UID: "b9d53401-2853-4ace-84c5-621db486afe4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.215620 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk" (OuterVolumeSpecName: "kube-api-access-dtrtk") pod "b9d53401-2853-4ace-84c5-621db486afe4" (UID: "b9d53401-2853-4ace-84c5-621db486afe4"). InnerVolumeSpecName "kube-api-access-dtrtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.307721 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.310767 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.310787 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.310797 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.395938 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.514808 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4a63b89d-496c-4f6e-8ba3-a18de60230af","Type":"ContainerStarted","Data":"de144da18fc6ad1e6da77fb3483d80cc8a6e01414222c71e9a594243e56e42c3"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.516937 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.517229 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" event={"ID":"b9d53401-2853-4ace-84c5-621db486afe4","Type":"ContainerDied","Data":"8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.517277 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.518482 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerStarted","Data":"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.520100 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerStarted","Data":"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.786400 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: W0130 23:00:03.791664 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a89776_fed1_4db4_80e6_11cfdb8f810b.slice/crio-15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8 WatchSource:0}: Error finding container 15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8: Status 404 returned error can't find the container with id 15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8 Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.959819 4979 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: E0130 23:00:03.960135 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d53401-2853-4ace-84c5-621db486afe4" containerName="collect-profiles" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.960153 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d53401-2853-4ace-84c5-621db486afe4" containerName="collect-profiles" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.960310 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d53401-2853-4ace-84c5-621db486afe4" containerName="collect-profiles" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.961046 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964266 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964583 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964797 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8dl6r" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964920 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.977203 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.113203 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.117744 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120229 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6s4\" (UniqueName: \"kubernetes.io/projected/7dad08bf-c93b-417a-aeef-633e774fffcc-kube-api-access-sp6s4\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120266 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120295 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120326 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120401 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120436 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120454 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222494 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222515 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222556 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6s4\" (UniqueName: \"kubernetes.io/projected/7dad08bf-c93b-417a-aeef-633e774fffcc-kube-api-access-sp6s4\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222577 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222617 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.224216 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.224449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.224999 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.225415 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.225795 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.225820 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/765b5e341727e84e0cabb13f32abb0e1618fe43994bb507754efecb61045210c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.230475 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.232961 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.243628 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6s4\" (UniqueName: \"kubernetes.io/projected/7dad08bf-c93b-417a-aeef-633e774fffcc-kube-api-access-sp6s4\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.256542 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.321302 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.530104 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4a63b89d-496c-4f6e-8ba3-a18de60230af","Type":"ContainerStarted","Data":"848bbed8593996688ca0963310f58dd25f1a5ce8fa6b6b4bfd577b5bbec4167d"} Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.530493 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.533590 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerStarted","Data":"7c9333f0e52c67c6d628c18c4ab9c918def259767c8171fa934338c1261af124"} Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.533645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerStarted","Data":"15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8"} Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.562429 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.562408802 podStartE2EDuration="2.562408802s" podCreationTimestamp="2026-01-30 23:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:04.558475576 +0000 UTC m=+4800.519722619" watchObservedRunningTime="2026-01-30 23:00:04.562408802 +0000 UTC m=+4800.523655835" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.745182 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 23:00:04 crc kubenswrapper[4979]: W0130 23:00:04.749950 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dad08bf_c93b_417a_aeef_633e774fffcc.slice/crio-8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569 WatchSource:0}: Error finding container 8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569: Status 404 returned error can't find the container with id 8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569 Jan 30 23:00:05 crc kubenswrapper[4979]: I0130 23:00:05.084208 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" path="/var/lib/kubelet/pods/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8/volumes" Jan 30 23:00:05 crc kubenswrapper[4979]: I0130 23:00:05.542272 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerStarted","Data":"f1b4f2b54268550994dacc30fe9d7d0a5581aad8d3a5f1469657727a62239b36"} Jan 30 23:00:05 crc kubenswrapper[4979]: I0130 23:00:05.542335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerStarted","Data":"8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569"} Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.156878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.563367 4979 generic.go:334] "Generic (PLEG): container finished" 
podID="7dad08bf-c93b-417a-aeef-633e774fffcc" containerID="f1b4f2b54268550994dacc30fe9d7d0a5581aad8d3a5f1469657727a62239b36" exitCode=0 Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.563489 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerDied","Data":"f1b4f2b54268550994dacc30fe9d7d0a5581aad8d3a5f1469657727a62239b36"} Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.565139 4979 generic.go:334] "Generic (PLEG): container finished" podID="20a89776-fed1-4db4-80e6-11cfdb8f810b" containerID="7c9333f0e52c67c6d628c18c4ab9c918def259767c8171fa934338c1261af124" exitCode=0 Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.565195 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerDied","Data":"7c9333f0e52c67c6d628c18c4ab9c918def259767c8171fa934338c1261af124"} Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.576627 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerStarted","Data":"6e0298f58e58526dc18bb81924423207d305cab9bac291d9db544286fae44088"} Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.580976 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerStarted","Data":"dd49143ef0315529344db293049bb4fd2535a07a43cd4e524a6e26284e67964c"} Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.632918 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.632899443 podStartE2EDuration="7.632899443s" podCreationTimestamp="2026-01-30 23:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:09.604692814 +0000 UTC m=+4805.565939947" watchObservedRunningTime="2026-01-30 23:00:09.632899443 +0000 UTC m=+4805.594146476" Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.634009 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.634004333 podStartE2EDuration="8.634004333s" podCreationTimestamp="2026-01-30 23:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:09.629392298 +0000 UTC m=+4805.590639331" watchObservedRunningTime="2026-01-30 23:00:09.634004333 +0000 UTC m=+4805.595251366" Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.021395 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.305261 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.375426 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.587769 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" 
containerID="cri-o://ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" gracePeriod=10 Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.068624 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.171497 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.171566 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.171732 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.183962 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch" (OuterVolumeSpecName: "kube-api-access-tcsch") pod "d04dc18f-4a9e-40c5-89af-d1a090d55f19" (UID: "d04dc18f-4a9e-40c5-89af-d1a090d55f19"). InnerVolumeSpecName "kube-api-access-tcsch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.209952 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config" (OuterVolumeSpecName: "config") pod "d04dc18f-4a9e-40c5-89af-d1a090d55f19" (UID: "d04dc18f-4a9e-40c5-89af-d1a090d55f19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.212533 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d04dc18f-4a9e-40c5-89af-d1a090d55f19" (UID: "d04dc18f-4a9e-40c5-89af-d1a090d55f19"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.273185 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.273236 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.273245 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.606875 4979 generic.go:334] "Generic (PLEG): container finished" podID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" exitCode=0 Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.606983 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerDied","Data":"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef"} Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.606999 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.608298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerDied","Data":"3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740"} Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.608333 4979 scope.go:117] "RemoveContainer" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.651941 4979 scope.go:117] "RemoveContainer" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.652958 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.657915 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.672473 4979 scope.go:117] "RemoveContainer" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" Jan 30 23:00:11 crc kubenswrapper[4979]: E0130 23:00:11.673275 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef\": container with ID starting with ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef not found: ID does not exist" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.673304 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef"} err="failed to get container status 
\"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef\": rpc error: code = NotFound desc = could not find container \"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef\": container with ID starting with ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef not found: ID does not exist" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.673326 4979 scope.go:117] "RemoveContainer" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" Jan 30 23:00:11 crc kubenswrapper[4979]: E0130 23:00:11.673762 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6\": container with ID starting with e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6 not found: ID does not exist" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.673782 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6"} err="failed to get container status \"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6\": rpc error: code = NotFound desc = could not find container \"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6\": container with ID starting with e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6 not found: ID does not exist" Jan 30 23:00:12 crc kubenswrapper[4979]: E0130 23:00:12.203211 4979 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.143:57152->38.102.83.143:38353: read tcp 38.102.83.143:57152->38.102.83.143:38353: read: connection reset by peer Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.087345 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" path="/var/lib/kubelet/pods/d04dc18f-4a9e-40c5-89af-d1a090d55f19/volumes" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.309172 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.309376 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.411799 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.717133 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 23:00:14 crc kubenswrapper[4979]: I0130 23:00:14.321847 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:14 crc kubenswrapper[4979]: I0130 23:00:14.321890 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:16 crc kubenswrapper[4979]: I0130 23:00:16.633216 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:16 crc kubenswrapper[4979]: I0130 23:00:16.703869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.330627 4979 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:21 crc kubenswrapper[4979]: E0130 23:00:21.331556 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="init" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.331586 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="init" Jan 30 23:00:21 crc kubenswrapper[4979]: E0130 23:00:21.331621 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.331634 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.331892 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.332998 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.335882 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.346699 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.438432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.438499 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.540880 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.540957 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.541985 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"root-account-create-update-pzcvl\" (UID: 
\"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.559421 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.649712 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.149188 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:22 crc kubenswrapper[4979]: W0130 23:00:22.153192 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e6c86a3_68af_49ab_9829_7bbe8fc0b0ba.slice/crio-6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d WatchSource:0}: Error finding container 6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d: Status 404 returned error can't find the container with id 6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.700176 4979 generic.go:334] "Generic (PLEG): container finished" podID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerID="cf41beecee7e20e5f9c898a20d24e379b95780b90e8224537101891074538c0d" exitCode=0 Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.700242 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pzcvl" event={"ID":"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba","Type":"ContainerDied","Data":"cf41beecee7e20e5f9c898a20d24e379b95780b90e8224537101891074538c0d"} Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.700562 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pzcvl" event={"ID":"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba","Type":"ContainerStarted","Data":"6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d"} Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.054471 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.191500 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.192429 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.193191 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" (UID: "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.193639 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.198324 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv" (OuterVolumeSpecName: "kube-api-access-ltdqv") pod "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" (UID: "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba"). InnerVolumeSpecName "kube-api-access-ltdqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.295940 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.718383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pzcvl" event={"ID":"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba","Type":"ContainerDied","Data":"6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d"} Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.718440 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.718499 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:26 crc kubenswrapper[4979]: I0130 23:00:26.023223 4979 scope.go:117] "RemoveContainer" containerID="3bbe88baa1620c36ba12ba04d5a8542170b476b0b0988530b1848eeba6a89780" Jan 30 23:00:27 crc kubenswrapper[4979]: I0130 23:00:27.952290 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:27 crc kubenswrapper[4979]: I0130 23:00:27.961136 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:29 crc kubenswrapper[4979]: I0130 23:00:29.090923 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" path="/var/lib/kubelet/pods/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba/volumes" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.671777 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:30 crc kubenswrapper[4979]: E0130 23:00:30.673378 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerName="mariadb-account-create-update" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.673595 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerName="mariadb-account-create-update" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.673882 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerName="mariadb-account-create-update" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.694764 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.699672 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.809010 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.809113 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.809227 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.910587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"redhat-marketplace-b72zd\" (UID: 
\"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.910695 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.910784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.911069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.911212 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.943627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.039918 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.502699 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.776927 4979 generic.go:334] "Generic (PLEG): container finished" podID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" exitCode=0 Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.777007 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda"} Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.777473 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerStarted","Data":"3b4bbbd2f5f8d605423ee2168dd3e0a963546e4d187eae49e56f1c75e10714ce"} Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.039549 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.039648 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.039702 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.040439 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.040497 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9" gracePeriod=600 Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.789504 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9" exitCode=0 Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.789559 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"} Jan 30 23:00:32 crc 
kubenswrapper[4979]: I0130 23:00:32.792055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"} Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.792134 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.795483 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerStarted","Data":"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4"} Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.964957 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.966129 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.970405 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.010397 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.060067 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.060118 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.161772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.161825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.162897 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " 
pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.182585 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.317900 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.741922 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.803487 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ntfjw" event={"ID":"579619ae-df83-40ff-8580-331060c16faf","Type":"ContainerStarted","Data":"e18d7827abd6930f15f3cea2c523dd9b6d4d86406411ab95da3f4fd12a3f6447"} Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.811651 4979 generic.go:334] "Generic (PLEG): container finished" podID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" exitCode=0 Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.811696 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4"} Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.811723 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerStarted","Data":"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef"} Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.836469 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b72zd" podStartSLOduration=2.121259997 podStartE2EDuration="3.83644787s" podCreationTimestamp="2026-01-30 23:00:30 +0000 UTC" firstStartedPulling="2026-01-30 23:00:31.7796045 +0000 UTC m=+4827.740851533" lastFinishedPulling="2026-01-30 23:00:33.494792383 +0000 UTC m=+4829.456039406" observedRunningTime="2026-01-30 23:00:33.835730471 +0000 UTC m=+4829.796977504" watchObservedRunningTime="2026-01-30 23:00:33.83644787 +0000 UTC m=+4829.797694903" Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.820834 4979 generic.go:334] "Generic (PLEG): container finished" podID="579619ae-df83-40ff-8580-331060c16faf" containerID="2d0a143830dd73a91f1cb09ef9f3967be5ae0e4eb61c252cb0405d7e7fe00ec4" exitCode=0 Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.820910 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ntfjw" event={"ID":"579619ae-df83-40ff-8580-331060c16faf","Type":"ContainerDied","Data":"2d0a143830dd73a91f1cb09ef9f3967be5ae0e4eb61c252cb0405d7e7fe00ec4"} Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.822987 4979 generic.go:334] "Generic (PLEG): container finished" podID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" exitCode=0 Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.823094 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerDied","Data":"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69"} Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.832590 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerStarted","Data":"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398"} Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.833527 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.834283 4979 generic.go:334] "Generic (PLEG): container finished" podID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5" exitCode=0 Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.834382 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerDied","Data":"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"} Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.858531 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.858496116 podStartE2EDuration="36.858496116s" podCreationTimestamp="2026-01-30 22:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:35.852927926 +0000 UTC m=+4831.814174959" watchObservedRunningTime="2026-01-30 23:00:35.858496116 +0000 UTC m=+4831.819743199" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.147379 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.247421 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"579619ae-df83-40ff-8580-331060c16faf\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.247470 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"579619ae-df83-40ff-8580-331060c16faf\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.248164 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "579619ae-df83-40ff-8580-331060c16faf" (UID: "579619ae-df83-40ff-8580-331060c16faf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.252465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst" (OuterVolumeSpecName: "kube-api-access-2lhst") pod "579619ae-df83-40ff-8580-331060c16faf" (UID: "579619ae-df83-40ff-8580-331060c16faf"). InnerVolumeSpecName "kube-api-access-2lhst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.349093 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.349130 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.845843 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerStarted","Data":"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"} Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.846593 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.848335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ntfjw" event={"ID":"579619ae-df83-40ff-8580-331060c16faf","Type":"ContainerDied","Data":"e18d7827abd6930f15f3cea2c523dd9b6d4d86406411ab95da3f4fd12a3f6447"} Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.848366 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18d7827abd6930f15f3cea2c523dd9b6d4d86406411ab95da3f4fd12a3f6447" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.848406 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.878827 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.878803412 podStartE2EDuration="36.878803412s" podCreationTimestamp="2026-01-30 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:36.877439836 +0000 UTC m=+4832.838686869" watchObservedRunningTime="2026-01-30 23:00:36.878803412 +0000 UTC m=+4832.840050445" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.040680 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.042241 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.104514 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.967287 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:42 crc kubenswrapper[4979]: I0130 23:00:42.021896 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:43 crc kubenswrapper[4979]: I0130 23:00:43.918148 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b72zd" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" containerID="cri-o://74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" gracePeriod=2 Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.373171 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.477575 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.477645 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.477779 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.480225 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities" (OuterVolumeSpecName: "utilities") pod "76e9b909-c2fa-4a2c-b161-6ee2436ce983" (UID: "76e9b909-c2fa-4a2c-b161-6ee2436ce983"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.483274 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6" (OuterVolumeSpecName: "kube-api-access-5z2k6") pod "76e9b909-c2fa-4a2c-b161-6ee2436ce983" (UID: "76e9b909-c2fa-4a2c-b161-6ee2436ce983"). InnerVolumeSpecName "kube-api-access-5z2k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.511237 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76e9b909-c2fa-4a2c-b161-6ee2436ce983" (UID: "76e9b909-c2fa-4a2c-b161-6ee2436ce983"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.579815 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.579854 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.579866 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928679 4979 generic.go:334] "Generic (PLEG): container finished" podID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" exitCode=0 Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928760 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef"} Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928804 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928833 4979 scope.go:117] "RemoveContainer" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928811 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"3b4bbbd2f5f8d605423ee2168dd3e0a963546e4d187eae49e56f1c75e10714ce"} Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.956568 4979 scope.go:117] "RemoveContainer" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.986348 4979 scope.go:117] "RemoveContainer" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.989066 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.998288 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.026390 4979 scope.go:117] "RemoveContainer" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" Jan 30 23:00:45 crc kubenswrapper[4979]: E0130 23:00:45.027217 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef\": container with ID starting with 74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef not found: ID does not exist" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027293 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef"} err="failed to get container status \"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef\": rpc error: code = NotFound desc = could not find container \"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef\": container with ID starting with 74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef not found: ID does not exist" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027337 4979 scope.go:117] "RemoveContainer" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" Jan 30 23:00:45 crc kubenswrapper[4979]: E0130 23:00:45.027793 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4\": container with ID starting with e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4 not found: ID does not exist" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027826 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4"} err="failed to get container status \"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4\": rpc error: code = NotFound desc = could not find 
container \"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4\": container with ID starting with e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4 not found: ID does not exist" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027847 4979 scope.go:117] "RemoveContainer" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" Jan 30 23:00:45 crc kubenswrapper[4979]: E0130 23:00:45.028131 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda\": container with ID starting with f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda not found: ID does not exist" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.028149 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda"} err="failed to get container status \"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda\": rpc error: code = NotFound desc = could not find container \"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda\": container with ID starting with f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda not found: ID does not exist" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.086683 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" path="/var/lib/kubelet/pods/76e9b909-c2fa-4a2c-b161-6ee2436ce983/volumes" Jan 30 23:00:51 crc kubenswrapper[4979]: I0130 23:00:51.211433 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 23:00:51 crc kubenswrapper[4979]: I0130 23:00:51.569295 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193159 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193866 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-content" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193880 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-content" Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193901 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-utilities" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193906 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-utilities" Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193925 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193932 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193942 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="579619ae-df83-40ff-8580-331060c16faf" containerName="mariadb-account-create-update" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193949 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="579619ae-df83-40ff-8580-331060c16faf" containerName="mariadb-account-create-update" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.194148 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.194157 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="579619ae-df83-40ff-8580-331060c16faf" containerName="mariadb-account-create-update" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.195218 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.204426 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.377258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.377771 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.377950 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.480299 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.480447 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.480591 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.481167 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.481314 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.506637 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.523389 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.563638 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:56 crc kubenswrapper[4979]: W0130 23:00:56.100196 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2795bb3d_be81_4873_96f6_6f3a42857827.slice/crio-89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d WatchSource:0}: Error finding container 89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d: Status 404 returned error can't find the container with id 89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d Jan 30 23:00:56 crc kubenswrapper[4979]: I0130 23:00:56.103331 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:00:56 crc kubenswrapper[4979]: I0130 23:00:56.210219 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerStarted","Data":"89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d"} Jan 30 23:00:56 crc kubenswrapper[4979]: I0130 23:00:56.488499 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:57 crc kubenswrapper[4979]: I0130 23:00:57.219111 4979 generic.go:334] "Generic (PLEG): container finished" podID="2795bb3d-be81-4873-96f6-6f3a42857827" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1" exitCode=0 Jan 30 23:00:57 crc kubenswrapper[4979]: I0130 23:00:57.219159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerDied","Data":"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"} Jan 30 23:00:57 crc kubenswrapper[4979]: I0130 23:00:57.684319 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" containerID="cri-o://80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" gracePeriod=604798 Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.228450 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerStarted","Data":"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"} Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.228807 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.246836 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" podStartSLOduration=4.246817271 podStartE2EDuration="4.246817271s" podCreationTimestamp="2026-01-30 23:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:58.242109344 +0000 UTC m=+4854.203356377" watchObservedRunningTime="2026-01-30 23:00:58.246817271 +0000 UTC m=+4854.208064304" Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.345386 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" containerID="cri-o://2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea" gracePeriod=604799 Jan 30 23:01:01 crc kubenswrapper[4979]: I0130 23:01:01.207403 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.249:5672: connect: connection refused" Jan 30 23:01:01 crc kubenswrapper[4979]: I0130 23:01:01.567705 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.250:5672: connect: connection refused" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.250431 4979 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258465 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258692 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258815 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258866 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258906 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258933 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258959 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258996 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") "
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.260154 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.260206 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.260452 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.275291 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.280193 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info" (OuterVolumeSpecName: "pod-info") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.281376 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc" (OuterVolumeSpecName: "kube-api-access-l6bwc") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "kube-api-access-l6bwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299577 4979 generic.go:334] "Generic (PLEG): container finished" podID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" exitCode=0 Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299635 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerDied","Data":"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398"} Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerDied","Data":"a401bb2823a23f538ee4aeaa4f20fe114ad58881d1e7c333038a2c3b643757b1"} Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299701 4979 scope.go:117] "RemoveContainer" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299915 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.308381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf" (OuterVolumeSpecName: "server-conf") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362560 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362594 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362605 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362615 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362623 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362633 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362643 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.398393 4979 scope.go:117] "RemoveContainer" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.411418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb" (OuterVolumeSpecName: "persistence") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.414786 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.454192 4979 scope.go:117] "RemoveContainer" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.458127 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398\": container with ID starting with 80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398 not found: ID does not exist" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.458168 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398"} err="failed to get container status \"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398\": rpc error: code = NotFound desc = could not find container \"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398\": container with ID starting with 80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398 not found: ID does not exist" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.458193 4979 scope.go:117] "RemoveContainer" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.458474 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69\": container with ID starting with bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69 not found: ID does not exist" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.458507 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69"} err="failed to get container status \"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69\": rpc error: code = NotFound desc = could not find container 
\"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69\": container with ID starting with bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69 not found: ID does not exist" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.464211 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.464258 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") on node \"crc\" " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.491827 4979 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.492350 4979 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb") on node "crc" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.565075 4979 reconciler_common.go:293] "Volume detached for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.634335 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.640151 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.670455 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.670925 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="setup-container" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.670949 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="setup-container" Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.670969 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.670977 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.671424 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.672502 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.675369 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.675770 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.680670 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-25ft5"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.680694 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.680762 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.690086 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.873738 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.873897 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874001 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c14c3367-d6a7-443a-9c15-913f73eac121-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874137 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874203 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c14c3367-d6a7-443a-9c15-913f73eac121-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874333 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874394 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmf4\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-kube-api-access-rcmf4\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874443 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.971371 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.975868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c14c3367-d6a7-443a-9c15-913f73eac121-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.975952 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.975982 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmf4\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-kube-api-access-rcmf4\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976064 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976095 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c14c3367-d6a7-443a-9c15-913f73eac121-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976155 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976179 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976849 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976867 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.977589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.978160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.981660 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c14c3367-d6a7-443a-9c15-913f73eac121-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.982263 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.982332 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d01eb65d8adc4d32a35137e4c958b2a45829d9b744b41c2b35ba94851c4723/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.984796 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.986168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c14c3367-d6a7-443a-9c15-913f73eac121-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.995513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmf4\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-kube-api-access-rcmf4\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.019252 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.065966 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-25ft5"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.074756 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078109 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078277 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078360 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078413 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078454 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078475 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078808 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.079662 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
"rabbitmq-erlang-cookie") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.080075 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.080114 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.081508 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.081571 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.081589 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.084356 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68" (OuterVolumeSpecName: "kube-api-access-s6c68") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "kube-api-access-s6c68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.085306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.100967 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77" (OuterVolumeSpecName: "persistence") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.102904 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" path="/var/lib/kubelet/pods/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c/volumes" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.106062 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info" (OuterVolumeSpecName: "pod-info") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.127812 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf" (OuterVolumeSpecName: "server-conf") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.178107 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182841 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182876 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182893 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182907 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182937 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") on node \"crc\" " Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182950 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.200187 4979 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.200495 4979 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77") on node "crc"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.284257 4979 reconciler_common.go:293] "Volume detached for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320093 4979 generic.go:334] "Generic (PLEG): container finished" podID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea" exitCode=0
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320161 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerDied","Data":"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"}
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320191 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerDied","Data":"77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc"}
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320215 4979 scope.go:117] "RemoveContainer" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320399 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.382806 4979 scope.go:117] "RemoveContainer" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.395829 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.430178 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.434989 4979 scope.go:117] "RemoveContainer" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"
Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.435440 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea\": container with ID starting with 2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea not found: ID does not exist" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.436239 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"} err="failed to get container status \"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea\": rpc error: code = NotFound desc = could not find container \"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea\": container with ID starting with 2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea not found: ID does not exist"
ID does not exist" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.436298 4979 scope.go:117] "RemoveContainer" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.436595 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.437443 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.437473 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.437504 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="setup-container" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.437515 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="setup-container" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.437970 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.439185 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.441683 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5\": container with ID starting with fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5 not found: ID does not exist" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.441760 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"} err="failed to get container status \"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5\": rpc error: code = NotFound desc = could not find container \"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5\": container with ID starting with fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5 not found: ID does not exist" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.445420 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.445747 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.445901 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.446622 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.448290 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vcf7n" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.459209 4979 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.472402 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.526819 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.596538 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.596960 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns" containerID="cri-o://41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059" gracePeriod=10 Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599317 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kch\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-kube-api-access-82kch\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599386 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/291b372c-0448-4bc4-88a4-e61a412ba45a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599444 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599470 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599502 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599540 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/291b372c-0448-4bc4-88a4-e61a412ba45a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599624 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599659 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700845 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700913 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82kch\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-kube-api-access-82kch\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700939 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/291b372c-0448-4bc4-88a4-e61a412ba45a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700998 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701023 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701064 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701092 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/291b372c-0448-4bc4-88a4-e61a412ba45a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701144 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701166 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701989 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.702250 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.702638 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.704178 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.705749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/291b372c-0448-4bc4-88a4-e61a412ba45a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.705864 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.707380 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.707446 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6649be050b7f075ba9ae655c5497b53ee628ceded131093e643c8c774a634b05/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.708702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/291b372c-0448-4bc4-88a4-e61a412ba45a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.724895 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82kch\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-kube-api-access-82kch\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.747505 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.764976 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.292538 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:06 crc kubenswrapper[4979]: W0130 23:01:06.299695 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod291b372c_0448_4bc4_88a4_e61a412ba45a.slice/crio-323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140 WatchSource:0}: Error finding container 323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140: Status 404 returned error can't find the container with id 323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.331894 4979 generic.go:334] "Generic (PLEG): container finished" podID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerID="41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059" exitCode=0
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.332001 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerDied","Data":"41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059"}
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.332071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerDied","Data":"994ef8ca363dd40c266610987d3ec533707b724f9ddcc04659cbb378e0bcd6ba"}
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.332085 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994ef8ca363dd40c266610987d3ec533707b724f9ddcc04659cbb378e0bcd6ba"
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.334113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerStarted","Data":"53d6d5c3023f791b5e35b41c3c8d865e43e3aa39c78f6651102d16bf2570191a"}
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.335536 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerStarted","Data":"323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140"}
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.592495 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr"
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.718696 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"1d741010-36ef-41d3-8613-ab2d49cacfb7\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") "
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.718763 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"1d741010-36ef-41d3-8613-ab2d49cacfb7\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") "
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.718898 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"1d741010-36ef-41d3-8613-ab2d49cacfb7\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") "
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.724741 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z" (OuterVolumeSpecName: "kube-api-access-zwd9z") pod "1d741010-36ef-41d3-8613-ab2d49cacfb7" (UID: "1d741010-36ef-41d3-8613-ab2d49cacfb7"). InnerVolumeSpecName "kube-api-access-zwd9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.751705 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config" (OuterVolumeSpecName: "config") pod "1d741010-36ef-41d3-8613-ab2d49cacfb7" (UID: "1d741010-36ef-41d3-8613-ab2d49cacfb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.752599 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d741010-36ef-41d3-8613-ab2d49cacfb7" (UID: "1d741010-36ef-41d3-8613-ab2d49cacfb7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.820889 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.820934 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.820949 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.082195 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" path="/var/lib/kubelet/pods/2fdc62fc-7d4d-4f2a-9611-4011f302320a/volumes"
Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.347069 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr"
Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.347069 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerStarted","Data":"09631796762b28657e11525e96a349f5957cec89645ef9ef43ab94b3449842f1"}
Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.399613 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"]
Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.404738 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"]
Jan 30 23:01:08 crc kubenswrapper[4979]: I0130 23:01:08.360454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerStarted","Data":"87009f94c390821ac1ade83b4aa7515b4c96904491368c0879f4ae02975bac0c"}
Jan 30 23:01:09 crc kubenswrapper[4979]: I0130 23:01:09.087022 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" path="/var/lib/kubelet/pods/1d741010-36ef-41d3-8613-ab2d49cacfb7/volumes"
Jan 30 23:01:39 crc kubenswrapper[4979]: I0130 23:01:39.699423 4979 generic.go:334] "Generic (PLEG): container finished" podID="c14c3367-d6a7-443a-9c15-913f73eac121" containerID="09631796762b28657e11525e96a349f5957cec89645ef9ef43ab94b3449842f1" exitCode=0
Jan 30 23:01:39 crc kubenswrapper[4979]: I0130 23:01:39.699514 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerDied","Data":"09631796762b28657e11525e96a349f5957cec89645ef9ef43ab94b3449842f1"}
Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.719251 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerStarted","Data":"d2d486f9c0e9e83665afcf1e616fb2a34661752573d9902a71e346ff5b3430e3"}
Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.720093 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.722962 4979 generic.go:334] "Generic (PLEG): container finished" podID="291b372c-0448-4bc4-88a4-e61a412ba45a" containerID="87009f94c390821ac1ade83b4aa7515b4c96904491368c0879f4ae02975bac0c" exitCode=0
Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.723015 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerDied","Data":"87009f94c390821ac1ade83b4aa7515b4c96904491368c0879f4ae02975bac0c"}
Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.789513 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.789481194 podStartE2EDuration="36.789481194s" podCreationTimestamp="2026-01-30 23:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:01:40.77519527 +0000 UTC m=+4896.736442363" watchObservedRunningTime="2026-01-30 23:01:40.789481194 +0000 UTC m=+4896.750728237"
Jan 30 23:01:41 crc kubenswrapper[4979]: I0130 23:01:41.735499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerStarted","Data":"2a0b70b652ac3173ea2a8beea0a763672dabafb86dc545ddafb6bfd55b608b46"}
Jan 30 23:01:41 crc kubenswrapper[4979]: I0130 23:01:41.736537 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:55 crc kubenswrapper[4979]: I0130 23:01:55.081248 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 23:01:55 crc kubenswrapper[4979]: I0130 23:01:55.124330 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.124309261 podStartE2EDuration="50.124309261s" podCreationTimestamp="2026-01-30 23:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:01:41.763411464 +0000 UTC m=+4897.724658517" watchObservedRunningTime="2026-01-30 23:01:55.124309261 +0000 UTC m=+4911.085556294"
Jan 30 23:01:55 crc kubenswrapper[4979]: I0130 23:01:55.769901 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.179058 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:02:07 crc kubenswrapper[4979]: E0130 23:02:07.180184 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180200 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns"
Jan 30 23:02:07 crc kubenswrapper[4979]: E0130 23:02:07.180220 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="init"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180227 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="init"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180393 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180995 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.183423 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgnfz"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.187866 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.365835 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"mariadb-client\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " pod="openstack/mariadb-client"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.468846 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"mariadb-client\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " pod="openstack/mariadb-client"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.516350 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"mariadb-client\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " pod="openstack/mariadb-client"
Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.807302 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:02:08 crc kubenswrapper[4979]: I0130 23:02:08.356374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:02:08 crc kubenswrapper[4979]: I0130 23:02:08.984379 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerStarted","Data":"45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0"}
Jan 30 23:02:08 crc kubenswrapper[4979]: I0130 23:02:08.985136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerStarted","Data":"fa39b2a93f043816f3a61766d193f90e8a471ec6816dd68b3f369617b01a06e6"}
Jan 30 23:02:09 crc kubenswrapper[4979]: I0130 23:02:09.002328 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.002304224 podStartE2EDuration="2.002304224s" podCreationTimestamp="2026-01-30 23:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:02:08.997867375 +0000 UTC m=+4924.959114418" watchObservedRunningTime="2026-01-30 23:02:09.002304224 +0000 UTC m=+4924.963551257"
Jan 30 23:02:13 crc kubenswrapper[4979]: E0130 23:02:13.552489 4979 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.143:42220->38.102.83.143:38353: write tcp 38.102.83.143:42220->38.102.83.143:38353: write: connection reset by peer
Jan 30 23:02:22 crc kubenswrapper[4979]: I0130 23:02:22.969563 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:02:22 crc kubenswrapper[4979]: I0130 23:02:22.970476 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client" containerID="cri-o://45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0" gracePeriod=30
Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.102407 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerID="45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0" exitCode=143
Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.102456 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerDied","Data":"45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0"}
Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.458402 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.469541 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") "
Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.478260 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb" (OuterVolumeSpecName: "kube-api-access-458cb") pod "fe0ac8f0-91f1-4df6-8085-199514aa8d14" (UID: "fe0ac8f0-91f1-4df6-8085-199514aa8d14"). InnerVolumeSpecName "kube-api-access-458cb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.571452 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") on node \"crc\" DevicePath \"\""
Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.120968 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerDied","Data":"fa39b2a93f043816f3a61766d193f90e8a471ec6816dd68b3f369617b01a06e6"}
Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.121016 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.121312 4979 scope.go:117] "RemoveContainer" containerID="45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0"
Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.151343 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.157939 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:02:25 crc kubenswrapper[4979]: I0130 23:02:25.083894 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" path="/var/lib/kubelet/pods/fe0ac8f0-91f1-4df6-8085-199514aa8d14/volumes"
Jan 30 23:02:32 crc kubenswrapper[4979]: I0130 23:02:32.039200 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:02:32 crc kubenswrapper[4979]: I0130 23:02:32.040004 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:03:02 crc kubenswrapper[4979]: I0130 23:03:02.040354 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:03:02 crc kubenswrapper[4979]: I0130 23:03:02.041122 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:03:26 crc kubenswrapper[4979]: I0130 23:03:26.211691 4979 scope.go:117] "RemoveContainer" containerID="f9b321201755262611e536dca11c7193aa5f320fa99f7da74aac970a57d934ef"
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.039611 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.040395 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.040450 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.041176 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.041236 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" gracePeriod=600
Jan 30 23:03:32 crc kubenswrapper[4979]: E0130 23:03:32.167156 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.729962 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" exitCode=0
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.730008 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"}
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.730073 4979 scope.go:117] "RemoveContainer" containerID="3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"
Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.730631 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:03:32 crc kubenswrapper[4979]: E0130 23:03:32.730934 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:03:46 crc kubenswrapper[4979]: I0130 23:03:46.069898 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:03:46 crc kubenswrapper[4979]: E0130 23:03:46.070678 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:04:01 crc kubenswrapper[4979]: I0130 23:04:01.071454 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:04:01 crc kubenswrapper[4979]: E0130 23:04:01.072687 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:04:14 crc kubenswrapper[4979]: I0130 23:04:14.069444 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:04:14 crc kubenswrapper[4979]: E0130 23:04:14.072662 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:04:25 crc kubenswrapper[4979]: I0130 23:04:25.073706 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:04:25 crc kubenswrapper[4979]: E0130 23:04:25.077114 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:04:37 crc kubenswrapper[4979]: I0130 23:04:37.069355 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:04:37 crc kubenswrapper[4979]: E0130 23:04:37.070169 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:04:50 crc kubenswrapper[4979]: I0130 23:04:50.069561 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:04:50 crc kubenswrapper[4979]: E0130 23:04:50.070373 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:05:01 crc kubenswrapper[4979]: I0130 23:05:01.069586 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:05:01 crc kubenswrapper[4979]: E0130 23:05:01.070323 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:05:14 crc kubenswrapper[4979]: I0130 23:05:14.070454 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:05:14 crc kubenswrapper[4979]: E0130 23:05:14.071159 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:05:29 crc kubenswrapper[4979]: I0130 23:05:29.069967 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:05:29 crc kubenswrapper[4979]: E0130 23:05:29.071435 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:05:41 crc kubenswrapper[4979]: I0130 23:05:41.069496 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:05:41 crc kubenswrapper[4979]: E0130 23:05:41.070342 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:05:55 crc kubenswrapper[4979]: I0130 23:05:55.073861 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:05:55 crc kubenswrapper[4979]: E0130 23:05:55.074950 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:06:06 crc kubenswrapper[4979]: I0130 23:06:06.069976 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:06:06 crc kubenswrapper[4979]: E0130 23:06:06.070889 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:06:18 crc kubenswrapper[4979]: I0130 23:06:18.071953 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:06:18 crc kubenswrapper[4979]: E0130 23:06:18.073092 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:06:26 crc kubenswrapper[4979]: I0130 23:06:26.311792 4979 scope.go:117] "RemoveContainer" containerID="e92999cfafdeac8211d5158e0746bbde23f4c02a545a5abc5507e1fcf7782d7c"
Jan 30 23:06:26 crc kubenswrapper[4979]: I0130 23:06:26.345328 4979 scope.go:117] "RemoveContainer" containerID="41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059"
Jan 30 23:06:26 crc kubenswrapper[4979]: I0130 23:06:26.391268 4979 scope.go:117] "RemoveContainer" containerID="cf41beecee7e20e5f9c898a20d24e379b95780b90e8224537101891074538c0d"
Jan 30 23:06:32 crc kubenswrapper[4979]: I0130 23:06:32.070103 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:06:32 crc kubenswrapper[4979]: E0130 23:06:32.071118 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:06:47 crc kubenswrapper[4979]: I0130 23:06:47.070571 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:06:47 crc kubenswrapper[4979]: E0130 23:06:47.072793 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.049742 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 23:06:58 crc kubenswrapper[4979]: E0130 23:06:58.050698 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.050716 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.050877 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.051448 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.053846 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgnfz"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.059854 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.241718 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.241865 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2dz\" (UniqueName: \"kubernetes.io/projected/74f9350b-6f51-40b4-85a5-be1ffad9eb0c-kube-api-access-2j2dz\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.342695 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2dz\" (UniqueName: \"kubernetes.io/projected/74f9350b-6f51-40b4-85a5-be1ffad9eb0c-kube-api-access-2j2dz\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.342788 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.346235 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.346288 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/632df6c27a56bcf278d2460de3056861e6548f56376a2497724cfc36261c4e22/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.363774 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2dz\" (UniqueName: \"kubernetes.io/projected/74f9350b-6f51-40b4-85a5-be1ffad9eb0c-kube-api-access-2j2dz\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.392999 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data"
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.677317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 30 23:06:59 crc kubenswrapper[4979]: I0130 23:06:59.070536 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:06:59 crc kubenswrapper[4979]: E0130 23:06:59.071544 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:06:59 crc kubenswrapper[4979]: I0130 23:06:59.237596 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 23:07:00 crc kubenswrapper[4979]: I0130 23:07:00.150675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"74f9350b-6f51-40b4-85a5-be1ffad9eb0c","Type":"ContainerStarted","Data":"edacbc61ffb4c9696cdc3f0b53d6ad7feb8b1e4ae8181ed8f18e11106e85d136"}
Jan 30 23:07:00 crc kubenswrapper[4979]: I0130 23:07:00.150716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"74f9350b-6f51-40b4-85a5-be1ffad9eb0c","Type":"ContainerStarted","Data":"3d00142cf672b613344d5691e7ff65b1551f927eb9a94aa238aa3d66c45fa533"}
Jan 30 23:07:00 crc kubenswrapper[4979]: I0130 23:07:00.170413 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.170389668 podStartE2EDuration="3.170389668s" podCreationTimestamp="2026-01-30 23:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:00.165948188 +0000 UTC m=+5216.127195261" watchObservedRunningTime="2026-01-30 23:07:00.170389668 +0000 UTC m=+5216.131636711"
Jan 30 23:07:02 crc kubenswrapper[4979]: I0130 23:07:02.958941 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:02 crc kubenswrapper[4979]: I0130 23:07:02.960265 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:02 crc kubenswrapper[4979]: I0130 23:07:02.969296 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.123178 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"mariadb-client\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") " pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.225150 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"mariadb-client\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") " pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.249433 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"mariadb-client\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") " pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.285907 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.787379 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:04 crc kubenswrapper[4979]: I0130 23:07:04.182567 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerID="dad83fe6e0dd13f90e65510d87c2454c3b37aa1abc0bae6f460d76fcaed45b7c" exitCode=0
Jan 30 23:07:04 crc kubenswrapper[4979]: I0130 23:07:04.182699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d5c3d722-f00d-4176-95e2-be3e349e9be4","Type":"ContainerDied","Data":"dad83fe6e0dd13f90e65510d87c2454c3b37aa1abc0bae6f460d76fcaed45b7c"}
Jan 30 23:07:04 crc kubenswrapper[4979]: I0130 23:07:04.182885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d5c3d722-f00d-4176-95e2-be3e349e9be4","Type":"ContainerStarted","Data":"d6e44c0eb41f535c1f9148ea313afd36baeddf318b670edacb1d512e409f16fb"}
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.567004 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.608949 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d5c3d722-f00d-4176-95e2-be3e349e9be4/mariadb-client/0.log"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.643752 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.651431 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.667874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"d5c3d722-f00d-4176-95e2-be3e349e9be4\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") "
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.675477 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp" (OuterVolumeSpecName: "kube-api-access-nq7zp") pod "d5c3d722-f00d-4176-95e2-be3e349e9be4" (UID: "d5c3d722-f00d-4176-95e2-be3e349e9be4"). InnerVolumeSpecName "kube-api-access-nq7zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.767497 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: E0130 23:07:05.767993 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerName="mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.768017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerName="mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.768262 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerName="mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.768907 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.771186 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") on node \"crc\" DevicePath \"\""
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.773395 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.873475 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"mariadb-client\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") " pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.975002 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"mariadb-client\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") " pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.995873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"mariadb-client\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") " pod="openstack/mariadb-client"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.091793 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.206181 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6e44c0eb41f535c1f9148ea313afd36baeddf318b670edacb1d512e409f16fb"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.206680 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.229782 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.545539 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.090106 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" path="/var/lib/kubelet/pods/d5c3d722-f00d-4176-95e2-be3e349e9be4/volumes"
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.215163 4979 generic.go:334] "Generic (PLEG): container finished" podID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerID="1414499c1891378760db0224adf8dc6e194b9af0ea5d3cf09c3a9e612363f6b8" exitCode=0
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.215215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"eeb09949-6907-4277-8d3e-1b0090b437ab","Type":"ContainerDied","Data":"1414499c1891378760db0224adf8dc6e194b9af0ea5d3cf09c3a9e612363f6b8"}
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.215246 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"eeb09949-6907-4277-8d3e-1b0090b437ab","Type":"ContainerStarted","Data":"a2b63c5fb36cb3fc1bf6a1bc440f232da7b94e99a04d42fdb97f1b6ef9ede3d4"}
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.559156 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.580864 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_eeb09949-6907-4277-8d3e-1b0090b437ab/mariadb-client/0.log"
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.608268 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.617120 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.714766 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"eeb09949-6907-4277-8d3e-1b0090b437ab\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") "
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.721917 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9" (OuterVolumeSpecName: "kube-api-access-8vkk9") pod "eeb09949-6907-4277-8d3e-1b0090b437ab" (UID: "eeb09949-6907-4277-8d3e-1b0090b437ab"). InnerVolumeSpecName "kube-api-access-8vkk9".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.816779 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:09 crc kubenswrapper[4979]: I0130 23:07:09.083651 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" path="/var/lib/kubelet/pods/eeb09949-6907-4277-8d3e-1b0090b437ab/volumes" Jan 30 23:07:09 crc kubenswrapper[4979]: I0130 23:07:09.231171 4979 scope.go:117] "RemoveContainer" containerID="1414499c1891378760db0224adf8dc6e194b9af0ea5d3cf09c3a9e612363f6b8" Jan 30 23:07:09 crc kubenswrapper[4979]: I0130 23:07:09.231273 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 23:07:11 crc kubenswrapper[4979]: I0130 23:07:11.070561 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:11 crc kubenswrapper[4979]: E0130 23:07:11.071377 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:23 crc kubenswrapper[4979]: I0130 23:07:23.069832 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:23 crc kubenswrapper[4979]: E0130 23:07:23.071821 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:36 crc kubenswrapper[4979]: I0130 23:07:36.069498 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:36 crc kubenswrapper[4979]: E0130 23:07:36.070363 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.224149 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 23:07:41 crc kubenswrapper[4979]: E0130 23:07:41.224992 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerName="mariadb-client" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.225006 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerName="mariadb-client" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 
23:07:41.225172 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerName="mariadb-client" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.225925 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.228360 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qxjtc" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.228431 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.229098 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.245667 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.246910 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.264400 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.266431 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.276928 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.287085 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.321534 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341187 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-config\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341236 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6afaef21-c973-4ec1-ae90-f3c9b603f713-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341257 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afaef21-c973-4ec1-ae90-f3c9b603f713-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fbe256e-5861-4bd2-b76d-a53f79b48380-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341294 4979 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341325 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341369 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7dr\" (UniqueName: \"kubernetes.io/projected/7fbe256e-5861-4bd2-b76d-a53f79b48380-kube-api-access-cc7dr\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-config\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341476 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvz8c\" (UniqueName: \"kubernetes.io/projected/6afaef21-c973-4ec1-ae90-f3c9b603f713-kube-api-access-vvz8c\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341512 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-config\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341534 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341568 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhjd\" (UniqueName: \"kubernetes.io/projected/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-kube-api-access-zkhjd\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341600 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbe256e-5861-4bd2-b76d-a53f79b48380-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341644 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341734 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341774 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443160 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7dr\" (UniqueName: \"kubernetes.io/projected/7fbe256e-5861-4bd2-b76d-a53f79b48380-kube-api-access-cc7dr\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443262 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-config\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvz8c\" (UniqueName: \"kubernetes.io/projected/6afaef21-c973-4ec1-ae90-f3c9b603f713-kube-api-access-vvz8c\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443328 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443354 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhjd\" (UniqueName: 
\"kubernetes.io/projected/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-kube-api-access-zkhjd\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-config\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443403 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbe256e-5861-4bd2-b76d-a53f79b48380-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443451 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443484 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443507 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443623 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6afaef21-c973-4ec1-ae90-f3c9b603f713-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444253 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6afaef21-c973-4ec1-ae90-f3c9b603f713-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" 
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444414 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-config\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-config\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444800 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444982 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443652 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afaef21-c973-4ec1-ae90-f3c9b603f713-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.445668 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-config\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.446920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-config\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.446963 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fbe256e-5861-4bd2-b76d-a53f79b48380-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.446966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fbe256e-5861-4bd2-b76d-a53f79b48380-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.447078 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 
23:07:41.447105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.447620 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449661 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449697 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c4311504aa9ddb87cb58b309caa6648fd7afe05a49693bfa2a051e7126a9c4f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449831 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449861 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/918694f6df4459f5128a01366ad2648fae189e6c4cb8a5e1b5ff346c136bec2e/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.451663 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.453581 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.454455 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.456969 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/68f121d80a25a2dfded0aacfd93a0c9c0b91744b1671c55870391288bebd4413/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.457257 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afaef21-c973-4ec1-ae90-f3c9b603f713-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.461477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbe256e-5861-4bd2-b76d-a53f79b48380-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.468642 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhjd\" (UniqueName: \"kubernetes.io/projected/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-kube-api-access-zkhjd\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.468787 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7dr\" (UniqueName: \"kubernetes.io/projected/7fbe256e-5861-4bd2-b76d-a53f79b48380-kube-api-access-cc7dr\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.469877 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvz8c\" (UniqueName: \"kubernetes.io/projected/6afaef21-c973-4ec1-ae90-f3c9b603f713-kube-api-access-vvz8c\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.474370 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.500843 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.505955 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.508092 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gws9k" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.508304 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.508957 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.509661 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.512470 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.524779 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.527063 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.531685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.541154 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551714 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551829 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-config\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551878 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e971ad9f-b09c-4504-8caf-f6c9f0801e00-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551933 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glw4r\" (UniqueName: \"kubernetes.io/projected/e971ad9f-b09c-4504-8caf-f6c9f0801e00-kube-api-access-glw4r\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " 
pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551968 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.552099 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e971ad9f-b09c-4504-8caf-f6c9f0801e00-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551826 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.558847 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.570629 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.585917 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.622136 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.633111 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.663859 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.663944 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcq5\" (UniqueName: \"kubernetes.io/projected/755c668a-a4c9-4a52-901d-338208af4efb-kube-api-access-7pcq5\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.663989 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-config\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664114 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664143 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664166 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e971ad9f-b09c-4504-8caf-f6c9f0801e00-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664190 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-config\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664215 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b0076344-a5b2-4fef-8a6f-28b6194b850e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664239 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0076344-a5b2-4fef-8a6f-28b6194b850e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664268 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664341 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-config\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664368 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e971ad9f-b09c-4504-8caf-f6c9f0801e00-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664391 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/755c668a-a4c9-4a52-901d-338208af4efb-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " 
pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664420 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755c668a-a4c9-4a52-901d-338208af4efb-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664445 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6t8\" (UniqueName: \"kubernetes.io/projected/b0076344-a5b2-4fef-8a6f-28b6194b850e-kube-api-access-6n6t8\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664466 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glw4r\" (UniqueName: \"kubernetes.io/projected/e971ad9f-b09c-4504-8caf-f6c9f0801e00-kube-api-access-glw4r\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664492 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.666109 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e971ad9f-b09c-4504-8caf-f6c9f0801e00-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.666643 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.667949 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-config\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.675716 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.675768 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5e9743c8090485d3d2afb43a58a0244989161bfb733c2eac195ad668a813c39f/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.686175 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e971ad9f-b09c-4504-8caf-f6c9f0801e00-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.696330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glw4r\" (UniqueName: \"kubernetes.io/projected/e971ad9f-b09c-4504-8caf-f6c9f0801e00-kube-api-access-glw4r\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.709720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768000 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-config\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768194 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768347 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-config\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768668 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b0076344-a5b2-4fef-8a6f-28b6194b850e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc 
kubenswrapper[4979]: I0130 23:07:41.768705 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0076344-a5b2-4fef-8a6f-28b6194b850e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768729 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768807 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/755c668a-a4c9-4a52-901d-338208af4efb-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768837 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755c668a-a4c9-4a52-901d-338208af4efb-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768861 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n6t8\" (UniqueName: \"kubernetes.io/projected/b0076344-a5b2-4fef-8a6f-28b6194b850e-kube-api-access-6n6t8\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcq5\" (UniqueName: \"kubernetes.io/projected/755c668a-a4c9-4a52-901d-338208af4efb-kube-api-access-7pcq5\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.770424 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-config\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771134 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b0076344-a5b2-4fef-8a6f-28b6194b850e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771401 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-scripts\") pod \"ovsdbserver-sb-1\" 
(UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-config\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771961 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.772481 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/755c668a-a4c9-4a52-901d-338208af4efb-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777412 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777457 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51b6a8149a781e83bc033c481ee8447251e18fbab398dcdd6705f5556202058c/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777484 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755c668a-a4c9-4a52-901d-338208af4efb-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777665 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777730 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/876aca8e7382817e075ed153eb3238980fa4dbca8f862d2759510bfc684ac158/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.794575 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcq5\" (UniqueName: \"kubernetes.io/projected/755c668a-a4c9-4a52-901d-338208af4efb-kube-api-access-7pcq5\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.796759 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6t8\" (UniqueName: \"kubernetes.io/projected/b0076344-a5b2-4fef-8a6f-28b6194b850e-kube-api-access-6n6t8\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.797651 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0076344-a5b2-4fef-8a6f-28b6194b850e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.808899 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.817238 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:41.964353 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:41.994724 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.002016 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.106139 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.223611 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.507792 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6afaef21-c973-4ec1-ae90-f3c9b603f713","Type":"ContainerStarted","Data":"d0c39143de06ebe9cd1b12315fb4ecd2d7dd2c065bcc788d939246493e0bd6f9"} Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.508345 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6afaef21-c973-4ec1-ae90-f3c9b603f713","Type":"ContainerStarted","Data":"5c2bf53cd937d5ac0e7a882b42a4a8a3628eea12fdc0fc51eef6ec3827038da2"} Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.508358 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6afaef21-c973-4ec1-ae90-f3c9b603f713","Type":"ContainerStarted","Data":"2a4e15528c0dedeca1ac8f76b1da55061eb792bda36e6799859912e5b5c1933e"} Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.510152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"7fbe256e-5861-4bd2-b76d-a53f79b48380","Type":"ContainerStarted","Data":"61030cb286abaa2c9f8538b45fc88d8936b1ea892fb44474226c07b6da0a542a"} Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.510236 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"7fbe256e-5861-4bd2-b76d-a53f79b48380","Type":"ContainerStarted","Data":"2c2f621adbee286f2e786d8f506dbf81838394c0dd92c3525de6ec27bd1e7837"} Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.510260 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"7fbe256e-5861-4bd2-b76d-a53f79b48380","Type":"ContainerStarted","Data":"bd2580ba7b23b4b69e922fecf9c587461f8a071ca435bdaab003995368b920f2"} Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.537556 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.537533592 podStartE2EDuration="2.537533592s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:42.531808209 +0000 UTC m=+5258.493055242" watchObservedRunningTime="2026-01-30 23:07:42.537533592 +0000 UTC m=+5258.498780635" Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.557521 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=2.557501239 podStartE2EDuration="2.557501239s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:42.555772463 +0000 UTC m=+5258.517019496" watchObservedRunningTime="2026-01-30 23:07:42.557501239 +0000 UTC m=+5258.518748272" Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.955246 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 23:07:42 crc kubenswrapper[4979]: W0130 23:07:42.960268 4979 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod977a1b80_05e8_4d3c_acbb_e9ea09b98ab0.slice/crio-70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686 WatchSource:0}: Error finding container 70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686: Status 404 returned error can't find the container with id 70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686 Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.063592 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 23:07:43 crc kubenswrapper[4979]: W0130 23:07:43.071631 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode971ad9f_b09c_4504_8caf_f6c9f0801e00.slice/crio-3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d WatchSource:0}: Error finding container 3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d: Status 404 returned error can't find the container with id 3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.520168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0","Type":"ContainerStarted","Data":"3223feaa98801b01329ed187a8540fdad9740917d05dcb1d5b0cb5aa4d469e6a"} Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.520215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0","Type":"ContainerStarted","Data":"c80a588eae18f9547cfdc9e48db31fd672f165661f20b21bdb48d2c83a28a666"} Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.520226 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0","Type":"ContainerStarted","Data":"70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686"} Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.524675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e971ad9f-b09c-4504-8caf-f6c9f0801e00","Type":"ContainerStarted","Data":"260cd81d474b710574b3f8a6eb3110375102080c90db56c5b42473efb73a8ec1"} Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.524746 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e971ad9f-b09c-4504-8caf-f6c9f0801e00","Type":"ContainerStarted","Data":"b48e835aa0a360aa32ea8c4737c8052fef8e01380236ba4013f36b0ea86a378e"} Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.524764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e971ad9f-b09c-4504-8caf-f6c9f0801e00","Type":"ContainerStarted","Data":"3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d"} Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.542429 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.542408795 podStartE2EDuration="3.542408795s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:43.539725893 +0000 UTC m=+5259.500972936" watchObservedRunningTime="2026-01-30 23:07:43.542408795 +0000 UTC m=+5259.503655828" Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 
23:07:43.566677 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.566655097 podStartE2EDuration="3.566655097s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:43.558388504 +0000 UTC m=+5259.519635537" watchObservedRunningTime="2026-01-30 23:07:43.566655097 +0000 UTC m=+5259.527902130" Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.638621 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.987091 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.535217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"b0076344-a5b2-4fef-8a6f-28b6194b850e","Type":"ContainerStarted","Data":"d6ae2c685aae9731a910ef68b09d5270f6864e0ae9eb4a1760ea7adc17914b78"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.535646 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"b0076344-a5b2-4fef-8a6f-28b6194b850e","Type":"ContainerStarted","Data":"433f5d079d3986f10a9eccf233a36e463493a6aba9305e164e67a3aacaea7d61"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.535664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"b0076344-a5b2-4fef-8a6f-28b6194b850e","Type":"ContainerStarted","Data":"02877d77699bf44f061f8ed0899d78a733535b092cc278a0e3accdef72ed5fd6"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.538255 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"755c668a-a4c9-4a52-901d-338208af4efb","Type":"ContainerStarted","Data":"e248a953d27976cbeb11142e924262b8c9b9e76adc6cb183594654190713a44a"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.538294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"755c668a-a4c9-4a52-901d-338208af4efb","Type":"ContainerStarted","Data":"60b35e06f342ac087469e8c2e5b961f83d746bb94b3b819c2b659c7be2dbad09"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.538304 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"755c668a-a4c9-4a52-901d-338208af4efb","Type":"ContainerStarted","Data":"a626721da3fc152a6144e6a16d0b89b334dd123ac91b67cc1ea5e5389daf2143"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.559397 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.562135 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.562112396 podStartE2EDuration="4.562112396s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:44.55185647 +0000 UTC m=+5260.513103543" watchObservedRunningTime="2026-01-30 23:07:44.562112396 +0000 UTC m=+5260.523359469" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.581507 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" 
podStartSLOduration=4.581482447 podStartE2EDuration="4.581482447s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:44.575990729 +0000 UTC m=+5260.537237792" watchObservedRunningTime="2026-01-30 23:07:44.581482447 +0000 UTC m=+5260.542729520" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.622498 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.634223 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.964908 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.996099 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:45 crc kubenswrapper[4979]: I0130 23:07:45.002299 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.559982 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.622318 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.633866 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.965180 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.995002 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.002506 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.069927 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:47 crc kubenswrapper[4979]: E0130 23:07:47.070434 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.603292 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.651533 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.685059 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.687565 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.747107 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.891321 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.892696 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.896493 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.910090 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.985440 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.987067 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.987218 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.987248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.013478 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.040154 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.054790 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.071635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " 
pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088706 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088767 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088794 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.089764 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.090416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.090604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.127567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.226561 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.391503 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.440981 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.442337 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.448276 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.460693 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.509674 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.511677 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.529939 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595788 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595869 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595955 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595976 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595990 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.596009 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.596053 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.617737 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697555 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697665 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697763 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697986 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.698018 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.698105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 
23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.698123 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.699571 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.700920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.701566 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.701883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.702591 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.703429 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.720393 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.735907 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.801171 4979 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.837879 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.865189 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.177926 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.459989 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:07:49 crc kubenswrapper[4979]: W0130 23:07:49.471466 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf817e1e3_576c_45c4_9049_44f021907fa8.slice/crio-85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5 WatchSource:0}: Error finding container 85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5: Status 404 returned error can't find the container with id 85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5 Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.586803 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerStarted","Data":"85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.588872 4979 generic.go:334] "Generic (PLEG): container finished" podID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" exitCode=0 Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.588933 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerDied","Data":"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.588952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerStarted","Data":"d2099c11bd8a88809e6380887ce5ad437e53d8e142c196904be1e3882261f67b"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.590115 4979 generic.go:334] "Generic (PLEG): container finished" podID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerID="7cfbab04a2120345ec1c2d8a670ed2de555c3fb869f81a6829e49293943f6184" exitCode=0 Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.591231 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" event={"ID":"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc","Type":"ContainerDied","Data":"7cfbab04a2120345ec1c2d8a670ed2de555c3fb869f81a6829e49293943f6184"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.591264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" event={"ID":"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc","Type":"ContainerStarted","Data":"41781b2cee80c0df7c286cbd20d0539b6f1e9ba1cd6b59a15c3498a6ad3139f2"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.866680 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918743 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918896 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918953 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.923345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd" (OuterVolumeSpecName: "kube-api-access-fd6hd") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "kube-api-access-fd6hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.937998 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config" (OuterVolumeSpecName: "config") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.938337 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.939312 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021488 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021524 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021533 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021544 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.602170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" event={"ID":"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc","Type":"ContainerDied","Data":"41781b2cee80c0df7c286cbd20d0539b6f1e9ba1cd6b59a15c3498a6ad3139f2"} Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.602231 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.602530 4979 scope.go:117] "RemoveContainer" containerID="7cfbab04a2120345ec1c2d8a670ed2de555c3fb869f81a6829e49293943f6184" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.605495 4979 generic.go:334] "Generic (PLEG): container finished" podID="f817e1e3-576c-45c4-9049-44f021907fa8" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3" exitCode=0 Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.605931 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"} Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.608808 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.614750 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerStarted","Data":"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a"} Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.615380 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.657504 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" podStartSLOduration=2.657485006 podStartE2EDuration="2.657485006s" podCreationTimestamp="2026-01-30 23:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:50.655424571 +0000 UTC m=+5266.616671604" watchObservedRunningTime="2026-01-30 
23:07:50.657485006 +0000 UTC m=+5266.618732039" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.718145 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.723147 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:51 crc kubenswrapper[4979]: I0130 23:07:51.089419 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" path="/var/lib/kubelet/pods/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc/volumes" Jan 30 23:07:51 crc kubenswrapper[4979]: I0130 23:07:51.624614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerStarted","Data":"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"} Jan 30 23:07:51 crc kubenswrapper[4979]: I0130 23:07:51.677423 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:52 crc kubenswrapper[4979]: I0130 23:07:52.050105 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:52 crc kubenswrapper[4979]: I0130 23:07:52.638235 4979 generic.go:334] "Generic (PLEG): container finished" podID="f817e1e3-576c-45c4-9049-44f021907fa8" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746" exitCode=0 Jan 30 23:07:52 crc kubenswrapper[4979]: I0130 23:07:52.638290 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"} Jan 30 23:07:53 crc kubenswrapper[4979]: I0130 23:07:53.651363 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerStarted","Data":"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"} Jan 30 23:07:53 crc kubenswrapper[4979]: I0130 23:07:53.677534 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dvltl" podStartSLOduration=3.245274332 podStartE2EDuration="5.677512128s" podCreationTimestamp="2026-01-30 23:07:48 +0000 UTC" firstStartedPulling="2026-01-30 23:07:50.608092998 +0000 UTC m=+5266.569340071" lastFinishedPulling="2026-01-30 23:07:53.040330824 +0000 UTC m=+5269.001577867" observedRunningTime="2026-01-30 23:07:53.668741242 +0000 UTC m=+5269.629988285" watchObservedRunningTime="2026-01-30 23:07:53.677512128 +0000 UTC m=+5269.638759171" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.252591 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 23:07:54 crc kubenswrapper[4979]: E0130 23:07:54.252932 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerName="init" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.252945 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerName="init" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.253108 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerName="init" Jan 30 23:07:54 
crc kubenswrapper[4979]: I0130 23:07:54.253745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.256363 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.277147 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.404163 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.404254 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ssf9\" (UniqueName: \"kubernetes.io/projected/43991b8d-f7aa-479c-9d38-e19114106e81-kube-api-access-5ssf9\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.404364 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/43991b8d-f7aa-479c-9d38-e19114106e81-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.505324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/43991b8d-f7aa-479c-9d38-e19114106e81-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.505421 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.505449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ssf9\" (UniqueName: \"kubernetes.io/projected/43991b8d-f7aa-479c-9d38-e19114106e81-kube-api-access-5ssf9\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.509560 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
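[editor's note, not part of the log] The ovn-copy-data entries above and below show kubelet's volume reconciler walking its standard sequence for each pod volume: VerifyControllerAttachedVolume, then MountVolume.MountDevice once per node (a no-op when the plugin lacks STAGE_UNSTAGE_VOLUME, hence the "Skipping" line just logged), then MountVolume.SetUp for the pod-local mount. A schematic of that ordering only; the types and function below are illustrative, not kubelet's actual implementation:

// Schematic of the reconciler sequence visible in these log entries.
package main

import "fmt"

type volume struct {
	name            string
	supportsStaging bool // did the CSI plugin advertise STAGE_UNSTAGE_VOLUME?
}

func mountForPod(v volume, pod string) {
	// Step 1: confirm the attach before any mount work starts.
	fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.name)
	if !v.supportsStaging {
		// Mirrors: "attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..."
		fmt.Println("attacher.MountDevice: capability not set, skipping NodeStageVolume")
	}
	// Step 2: the MountDevice operation is still recorded as succeeded,
	// with the node-wide globalmount path noted.
	fmt.Printf("MountVolume.MountDevice succeeded for volume %q\n", v.name)
	// Step 3: per-pod publish (SetUp) — also used for configmap/secret/projected volumes.
	fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, pod)
}

func main() {
	mountForPod(volume{name: "pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177", supportsStaging: false},
		"openstack/ovn-copy-data")
}

The same three-step pattern explains the ovsdbserver-sb-1/-2 entries earlier in the log, and the teardown entries (UnmountVolume.TearDown, then "Volume detached") are its mirror image when a pod is deleted.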
Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.509776 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d5214e6110063cd82083e5ae2f81858da3dca43ff685751cb7d855e8e239e21a/globalmount\"" pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.514558 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/43991b8d-f7aa-479c-9d38-e19114106e81-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.535284 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ssf9\" (UniqueName: \"kubernetes.io/projected/43991b8d-f7aa-479c-9d38-e19114106e81-kube-api-access-5ssf9\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.562948 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.576789 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 23:07:55 crc kubenswrapper[4979]: I0130 23:07:55.235967 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 23:07:55 crc kubenswrapper[4979]: W0130 23:07:55.257137 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43991b8d_f7aa_479c_9d38_e19114106e81.slice/crio-e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394 WatchSource:0}: Error finding container e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394: Status 404 returned error can't find the container with id e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394 Jan 30 23:07:55 crc kubenswrapper[4979]: I0130 23:07:55.679509 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"43991b8d-f7aa-479c-9d38-e19114106e81","Type":"ContainerStarted","Data":"fffbd7506642e3425021c1c24a8af84b0728ddffdb32adbffca07e12bb99bc2e"} Jan 30 23:07:55 crc kubenswrapper[4979]: I0130 23:07:55.679553 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"43991b8d-f7aa-479c-9d38-e19114106e81","Type":"ContainerStarted","Data":"e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394"} Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.070476 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:58 crc kubenswrapper[4979]: E0130 23:07:58.071768 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.803876 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.835779 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=5.835747358 podStartE2EDuration="5.835747358s" podCreationTimestamp="2026-01-30 23:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:55.694534058 +0000 UTC m=+5271.655781091" watchObservedRunningTime="2026-01-30 23:07:58.835747358 +0000 UTC m=+5274.796994411" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.839584 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.839762 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.889951 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.890363 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns" 
containerID="cri-o://57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" gracePeriod=10
Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.927589 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dvltl"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.393020 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.400246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"2795bb3d-be81-4873-96f6-6f3a42857827\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") "
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.400297 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"2795bb3d-be81-4873-96f6-6f3a42857827\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") "
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.400485 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"2795bb3d-be81-4873-96f6-6f3a42857827\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") "
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.420532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5" (OuterVolumeSpecName: "kube-api-access-9l5q5") pod "2795bb3d-be81-4873-96f6-6f3a42857827" (UID: "2795bb3d-be81-4873-96f6-6f3a42857827"). InnerVolumeSpecName "kube-api-access-9l5q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.504917 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") on node \"crc\" DevicePath \"\""
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.533215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config" (OuterVolumeSpecName: "config") pod "2795bb3d-be81-4873-96f6-6f3a42857827" (UID: "2795bb3d-be81-4873-96f6-6f3a42857827"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.552693 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2795bb3d-be81-4873-96f6-6f3a42857827" (UID: "2795bb3d-be81-4873-96f6-6f3a42857827"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.613400 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") on node \"crc\" DevicePath \"\""
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.613452 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.737736 4979 generic.go:334] "Generic (PLEG): container finished" podID="2795bb3d-be81-4873-96f6-6f3a42857827" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" exitCode=0
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.738536 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerDied","Data":"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"}
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.738468 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.739076 4979 scope.go:117] "RemoveContainer" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.739716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerDied","Data":"89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d"}
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.765934 4979 scope.go:117] "RemoveContainer" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.797766 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"]
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.799495 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dvltl"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.801188 4979 scope.go:117] "RemoveContainer" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"
Jan 30 23:07:59 crc kubenswrapper[4979]: E0130 23:07:59.802124 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105\": container with ID starting with 57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105 not found: ID does not exist" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.802190 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"} err="failed to get container status \"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105\": rpc error: code = NotFound desc = could not find container \"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105\": container with ID starting with 57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105 not found: ID does not exist"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.802229 4979 scope.go:117] "RemoveContainer" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"
Jan 30 23:07:59 crc kubenswrapper[4979]: E0130 23:07:59.802552 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1\": container with ID starting with ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1 not found: ID does not exist" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.802582 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"} err="failed to get container status \"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1\": rpc error: code = NotFound desc = could not find container \"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1\": container with ID starting with ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1 not found: ID does not exist"
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.804989 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"]
Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.854205 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvltl"]
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.631722 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 23:08:00 crc kubenswrapper[4979]: E0130 23:08:00.632627 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="init"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.632644 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="init"
Jan 30 23:08:00 crc kubenswrapper[4979]: E0130 23:08:00.632670 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.632679 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.632876 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.634115 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.645306 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.645680 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dtdc9"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.650610 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.673929 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.737505 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-scripts\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.737847 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-config\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.737949 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89760273-d9f8-4c51-8af9-4a651cadc92c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.738182 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89760273-d9f8-4c51-8af9-4a651cadc92c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.738257 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qprwt\" (UniqueName: \"kubernetes.io/projected/89760273-d9f8-4c51-8af9-4a651cadc92c-kube-api-access-qprwt\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.839886 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-scripts\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840099 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-config\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840134 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89760273-d9f8-4c51-8af9-4a651cadc92c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840182 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89760273-d9f8-4c51-8af9-4a651cadc92c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840220 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qprwt\" (UniqueName: \"kubernetes.io/projected/89760273-d9f8-4c51-8af9-4a651cadc92c-kube-api-access-qprwt\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.841015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89760273-d9f8-4c51-8af9-4a651cadc92c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.841312 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-scripts\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.841477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-config\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.847901 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89760273-d9f8-4c51-8af9-4a651cadc92c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.895736 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qprwt\" (UniqueName: \"kubernetes.io/projected/89760273-d9f8-4c51-8af9-4a651cadc92c-kube-api-access-qprwt\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0"
Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.970886 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.108179 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" path="/var/lib/kubelet/pods/2795bb3d-be81-4873-96f6-6f3a42857827/volumes"
Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.519974 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 23:08:01 crc kubenswrapper[4979]: W0130 23:08:01.540426 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89760273_d9f8_4c51_8af9_4a651cadc92c.slice/crio-d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9 WatchSource:0}: Error finding container d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9: Status 404 returned error can't find the container with id d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9
Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.759274 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dvltl" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server" containerID="cri-o://2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4" gracePeriod=2
Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.759691 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89760273-d9f8-4c51-8af9-4a651cadc92c","Type":"ContainerStarted","Data":"1e5c472af3d3bc83452fb0c302229ac81f4af1dc7db898a66c81f256ced076aa"}
Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.759783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89760273-d9f8-4c51-8af9-4a651cadc92c","Type":"ContainerStarted","Data":"d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9"}
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.282201 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.468175 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"f817e1e3-576c-45c4-9049-44f021907fa8\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") "
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.468888 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"f817e1e3-576c-45c4-9049-44f021907fa8\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") "
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.469803 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities" (OuterVolumeSpecName: "utilities") pod "f817e1e3-576c-45c4-9049-44f021907fa8" (UID: "f817e1e3-576c-45c4-9049-44f021907fa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.472423 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"f817e1e3-576c-45c4-9049-44f021907fa8\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") "
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.473569 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.481810 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw" (OuterVolumeSpecName: "kube-api-access-jndjw") pod "f817e1e3-576c-45c4-9049-44f021907fa8" (UID: "f817e1e3-576c-45c4-9049-44f021907fa8"). InnerVolumeSpecName "kube-api-access-jndjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.517748 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f817e1e3-576c-45c4-9049-44f021907fa8" (UID: "f817e1e3-576c-45c4-9049-44f021907fa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.575130 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.575173 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.779126 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89760273-d9f8-4c51-8af9-4a651cadc92c","Type":"ContainerStarted","Data":"1b05c028b4ab78dc76c6d91db484ae9571bc66f4b5b38e59b39f4c47dc098409"}
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.779322 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785350 4979 generic.go:334] "Generic (PLEG): container finished" podID="f817e1e3-576c-45c4-9049-44f021907fa8" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4" exitCode=0
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785395 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"}
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5"}
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785437 4979 scope.go:117] "RemoveContainer" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785625 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.806637 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.806593708 podStartE2EDuration="2.806593708s" podCreationTimestamp="2026-01-30 23:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:02.804716058 +0000 UTC m=+5278.765963091" watchObservedRunningTime="2026-01-30 23:08:02.806593708 +0000 UTC m=+5278.767840741"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.810480 4979 scope.go:117] "RemoveContainer" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.842535 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvltl"]
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.854221 4979 scope.go:117] "RemoveContainer" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.854989 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dvltl"]
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.887964 4979 scope.go:117] "RemoveContainer" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"
Jan 30 23:08:02 crc kubenswrapper[4979]: E0130 23:08:02.890064 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4\": container with ID starting with 2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4 not found: ID does not exist" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.890186 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"} err="failed to get container status \"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4\": rpc error: code = NotFound desc = could not find container \"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4\": container with ID starting with 2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4 not found: ID does not exist"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.890250 4979 scope.go:117] "RemoveContainer" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"
Jan 30 23:08:02 crc kubenswrapper[4979]: E0130 23:08:02.890967 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746\": container with ID starting with f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746 not found: ID does not exist" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.891076 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"} err="failed to get container status \"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746\": rpc error: code = NotFound desc = could not find container \"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746\": container with ID starting with f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746 not found: ID does not exist"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.891110 4979 scope.go:117] "RemoveContainer" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"
Jan 30 23:08:02 crc kubenswrapper[4979]: E0130 23:08:02.892578 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3\": container with ID starting with d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3 not found: ID does not exist" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.892633 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"} err="failed to get container status \"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3\": rpc error: code = NotFound desc = could not find container \"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3\": container with ID starting with d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3 not found: ID does not exist"
Jan 30 23:08:03 crc kubenswrapper[4979]: I0130 23:08:03.082347 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" path="/var/lib/kubelet/pods/f817e1e3-576c-45c4-9049-44f021907fa8/volumes"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.687480 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fcp6h"]
Jan 30 23:08:05 crc kubenswrapper[4979]: E0130 23:08:05.688575 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688593 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server"
Jan 30 23:08:05 crc kubenswrapper[4979]: E0130 23:08:05.688633 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-utilities"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688640 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-utilities"
Jan 30 23:08:05 crc kubenswrapper[4979]: E0130 23:08:05.688658 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-content"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688665 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-content"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688835 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.689559 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.696328 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"]
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.697263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.699352 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.706192 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fcp6h"]
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.716682 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"]
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837450 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837526 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837636 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837730 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939457 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939516 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.940589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.940704 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.977855 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.978251 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.016493 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.037203 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.451272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"]
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.515175 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fcp6h"]
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.824399 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerStarted","Data":"958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449"}
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.824874 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerStarted","Data":"0d25bcb42e6f27051a89d31c544da700a7cc3453f311a25ece7eec6ac94cf26c"}
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.828733 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerStarted","Data":"519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983"}
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.828784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerStarted","Data":"6515d8e233a2fa628ba23c75b309f7db000a664e3978a0df9d408d36f4c87c75"}
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.840706 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-fcp6h" podStartSLOduration=1.8406824880000001 podStartE2EDuration="1.840682488s" podCreationTimestamp="2026-01-30 23:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:06.838366486 +0000 UTC m=+5282.799613519" watchObservedRunningTime="2026-01-30 23:08:06.840682488 +0000 UTC m=+5282.801929521"
Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.862576 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e97b-account-create-update-7kkdr" podStartSLOduration=1.862556916 podStartE2EDuration="1.862556916s" podCreationTimestamp="2026-01-30 23:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:06.855721612 +0000 UTC m=+5282.816968635" watchObservedRunningTime="2026-01-30 23:08:06.862556916 +0000 UTC m=+5282.823803949"
Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.845464 4979 generic.go:334] "Generic (PLEG): container finished" podID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerID="519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983" exitCode=0
Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.845614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerDied","Data":"519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983"}
Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.851101 4979 generic.go:334] "Generic (PLEG): container finished" podID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerID="958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449" exitCode=0
Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.851161 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerDied","Data":"958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449"}
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.349858 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.369448 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.420749 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"cd1984c3-c561-48d8-8e99-a596088b25b7\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") "
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.420894 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"cd1984c3-c561-48d8-8e99-a596088b25b7\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") "
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.422239 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd1984c3-c561-48d8-8e99-a596088b25b7" (UID: "cd1984c3-c561-48d8-8e99-a596088b25b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.429265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf" (OuterVolumeSpecName: "kube-api-access-76rdf") pod "cd1984c3-c561-48d8-8e99-a596088b25b7" (UID: "cd1984c3-c561-48d8-8e99-a596088b25b7"). InnerVolumeSpecName "kube-api-access-76rdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.522346 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"244815ff-89c6-49ac-91e1-4d8f44de6066\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") "
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.522607 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"244815ff-89c6-49ac-91e1-4d8f44de6066\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") "
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.523207 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.523248 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.523263 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "244815ff-89c6-49ac-91e1-4d8f44de6066" (UID: "244815ff-89c6-49ac-91e1-4d8f44de6066"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.526590 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6" (OuterVolumeSpecName: "kube-api-access-z6lz6") pod "244815ff-89c6-49ac-91e1-4d8f44de6066" (UID: "244815ff-89c6-49ac-91e1-4d8f44de6066"). InnerVolumeSpecName "kube-api-access-z6lz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.624484 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.624514 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.882283 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerDied","Data":"0d25bcb42e6f27051a89d31c544da700a7cc3453f311a25ece7eec6ac94cf26c"}
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.882325 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d25bcb42e6f27051a89d31c544da700a7cc3453f311a25ece7eec6ac94cf26c"
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.882488 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h"
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.892285 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerDied","Data":"6515d8e233a2fa628ba23c75b309f7db000a664e3978a0df9d408d36f4c87c75"}
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.892338 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6515d8e233a2fa628ba23c75b309f7db000a664e3978a0df9d408d36f4c87c75"
Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.892409 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.201278 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9lbrp"]
Jan 30 23:08:11 crc kubenswrapper[4979]: E0130 23:08:11.201811 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerName="mariadb-account-create-update"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.201830 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerName="mariadb-account-create-update"
Jan 30 23:08:11 crc kubenswrapper[4979]: E0130 23:08:11.201851 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerName="mariadb-database-create"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.201859 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerName="mariadb-database-create"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.202093 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerName="mariadb-account-create-update"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.202111 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerName="mariadb-database-create"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.202898 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.210180 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.210613 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.211017 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.212084 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.228528 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lbrp"]
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.361434 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.361566 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.361596 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.465552 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.465769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.465919 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.470364 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.472795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.485541 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.568123 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.059112 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lbrp"]
Jan 30 23:08:12 crc kubenswrapper[4979]: W0130 23:08:12.063454 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e90fa06_119c_454e_9f4e_da0b5bff99bb.slice/crio-4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7 WatchSource:0}: Error finding container 4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7: Status 404 returned error can't find the container with id 4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7
Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.918278 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerStarted","Data":"14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d"}
Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.918798 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerStarted","Data":"4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7"}
Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.951075 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9lbrp" podStartSLOduration=1.9510125120000001 podStartE2EDuration="1.951012512s" podCreationTimestamp="2026-01-30 23:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:12.944312041 +0000 UTC m=+5288.905559074" watchObservedRunningTime="2026-01-30 23:08:12.951012512 +0000 UTC m=+5288.912259585"
Jan 30 23:08:13 crc kubenswrapper[4979]: I0130 23:08:13.070404 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:08:13 crc kubenswrapper[4979]: E0130 23:08:13.070777 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:08:13 crc kubenswrapper[4979]: I0130 23:08:13.932819 4979 generic.go:334] "Generic (PLEG): container finished" podID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerID="14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d" exitCode=0
Jan 30 23:08:13 crc kubenswrapper[4979]: I0130 23:08:13.932865 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerDied","Data":"14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d"}
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.323975 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.438702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") "
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.439090 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") "
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.439281 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") "
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.447357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l" (OuterVolumeSpecName: "kube-api-access-sln8l") pod "7e90fa06-119c-454e-9f4e-da0b5bff99bb" (UID: "7e90fa06-119c-454e-9f4e-da0b5bff99bb"). InnerVolumeSpecName "kube-api-access-sln8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.462473 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e90fa06-119c-454e-9f4e-da0b5bff99bb" (UID: "7e90fa06-119c-454e-9f4e-da0b5bff99bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.491352 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data" (OuterVolumeSpecName: "config-data") pod "7e90fa06-119c-454e-9f4e-da0b5bff99bb" (UID: "7e90fa06-119c-454e-9f4e-da0b5bff99bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.540968 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.541007 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.541019 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.959291 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerDied","Data":"4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7"}
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.959351 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7"
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.959373 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.234256 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"]
Jan 30 23:08:16 crc kubenswrapper[4979]: E0130 23:08:16.234814 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerName="keystone-db-sync"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.234849 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerName="keystone-db-sync"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.235165 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerName="keystone-db-sync"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.243357 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.247095 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"]
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.267392 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mdlw5"]
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.268503 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271133 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271462 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271590 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271905 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.272014 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.297826 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"]
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355302 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355387 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355418 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355441 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355462 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355484 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355522 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355549 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355574 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355613 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355637 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.486977 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487531 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487578 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487621 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487686 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487735 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487842 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487980 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.488702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs"
Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.488880 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod
\"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.488904 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.493087 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.493143 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.494867 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.495749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.500059 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.509848 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.523737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.562782 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.595652 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.055485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:08:17 crc kubenswrapper[4979]: W0130 23:08:17.060907 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36900afe_d3cd_4b93_8d7c_3d8d3a38f4f7.slice/crio-cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5 WatchSource:0}: Error finding container cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5: Status 404 returned error can't find the container with id cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5 Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.118416 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.992217 4979 generic.go:334] "Generic (PLEG): container finished" podID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerID="681ef7059193b0717b0eb969706fa681ca26f969cea9f506cb0573eaef292ba8" exitCode=0 Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.992380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerDied","Data":"681ef7059193b0717b0eb969706fa681ca26f969cea9f506cb0573eaef292ba8"} Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.992678 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerStarted","Data":"cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5"} Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.996009 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerStarted","Data":"b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350"} Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.996218 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerStarted","Data":"1c326a0b6026f7ae1eb8390c0b32e6b2fadd8e2f8a86f71035bee50a5ca7340a"} Jan 30 23:08:18 crc kubenswrapper[4979]: I0130 23:08:18.066169 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mdlw5" podStartSLOduration=2.066144563 podStartE2EDuration="2.066144563s" podCreationTimestamp="2026-01-30 23:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:18.058821816 +0000 UTC m=+5294.020068869" watchObservedRunningTime="2026-01-30 23:08:18.066144563 +0000 UTC m=+5294.027391606" Jan 30 23:08:19 crc kubenswrapper[4979]: I0130 23:08:19.005637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerStarted","Data":"99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3"} Jan 30 23:08:19 crc kubenswrapper[4979]: I0130 23:08:19.036574 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7457648489-f9xxs" podStartSLOduration=3.036539798 
podStartE2EDuration="3.036539798s" podCreationTimestamp="2026-01-30 23:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:19.026262912 +0000 UTC m=+5294.987509965" watchObservedRunningTime="2026-01-30 23:08:19.036539798 +0000 UTC m=+5294.997786841" Jan 30 23:08:20 crc kubenswrapper[4979]: I0130 23:08:20.013289 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:21 crc kubenswrapper[4979]: I0130 23:08:21.030180 4979 generic.go:334] "Generic (PLEG): container finished" podID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerID="b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350" exitCode=0 Jan 30 23:08:21 crc kubenswrapper[4979]: I0130 23:08:21.031364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerDied","Data":"b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350"} Jan 30 23:08:21 crc kubenswrapper[4979]: I0130 23:08:21.082674 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.393132 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.496849 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.496970 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497110 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497192 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497231 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497338 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") 
" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.505570 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.506137 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts" (OuterVolumeSpecName: "scripts") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.506384 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.509314 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d" (OuterVolumeSpecName: "kube-api-access-lnd4d") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "kube-api-access-lnd4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.528235 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.530920 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data" (OuterVolumeSpecName: "config-data") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599531 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599565 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599575 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599586 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599596 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599607 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.066731 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerDied","Data":"1c326a0b6026f7ae1eb8390c0b32e6b2fadd8e2f8a86f71035bee50a5ca7340a"} Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.066799 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c326a0b6026f7ae1eb8390c0b32e6b2fadd8e2f8a86f71035bee50a5ca7340a" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.066948 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.146338 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.152361 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.237059 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:08:23 crc kubenswrapper[4979]: E0130 23:08:23.237461 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerName="keystone-bootstrap" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.237482 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerName="keystone-bootstrap" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.237692 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerName="keystone-bootstrap" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.238406 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.241240 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.241312 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.241573 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.242605 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.246156 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.251374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413481 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413878 
4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413928 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.414077 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515917 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515942 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515967 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.516094 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.523282 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.526858 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.527084 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.527024 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.527391 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.537378 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.567706 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.808453 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:08:23 crc kubenswrapper[4979]: W0130 23:08:23.812669 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d5aa2c0_69c0_486f_8bf7_0f7539935f2e.slice/crio-567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c WatchSource:0}: Error finding container 567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c: Status 404 returned error can't find the container with id 567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c Jan 30 23:08:24 crc kubenswrapper[4979]: I0130 23:08:24.076168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerStarted","Data":"146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4"} Jan 30 23:08:24 crc kubenswrapper[4979]: I0130 23:08:24.076217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerStarted","Data":"567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c"} Jan 30 23:08:24 crc kubenswrapper[4979]: I0130 23:08:24.095057 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n2mf2" podStartSLOduration=1.095019185 podStartE2EDuration="1.095019185s" podCreationTimestamp="2026-01-30 23:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:24.091140991 +0000 UTC m=+5300.052388024" watchObservedRunningTime="2026-01-30 23:08:24.095019185 +0000 UTC m=+5300.056266218" Jan 30 23:08:25 crc kubenswrapper[4979]: I0130 23:08:25.086916 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" path="/var/lib/kubelet/pods/fa9355be-183f-4e09-9ffd-50d0be690e6c/volumes" Jan 30 23:08:26 crc kubenswrapper[4979]: I0130 23:08:26.564414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:26 crc kubenswrapper[4979]: I0130 23:08:26.641844 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:08:26 crc kubenswrapper[4979]: I0130 23:08:26.642207 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" containerID="cri-o://00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" gracePeriod=10 Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.070688 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:08:27 crc kubenswrapper[4979]: E0130 23:08:27.071611 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.087218 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115373 4979 generic.go:334] "Generic (PLEG): container finished" podID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" exitCode=0 Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerDied","Data":"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a"} Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115488 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerDied","Data":"d2099c11bd8a88809e6380887ce5ad437e53d8e142c196904be1e3882261f67b"} Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115493 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115507 4979 scope.go:117] "RemoveContainer" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.121496 4979 generic.go:334] "Generic (PLEG): container finished" podID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerID="146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4" exitCode=0 Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.121541 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerDied","Data":"146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4"} Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.143114 4979 scope.go:117] "RemoveContainer" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.166329 4979 scope.go:117] "RemoveContainer" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" Jan 30 23:08:27 crc kubenswrapper[4979]: E0130 23:08:27.166934 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a\": container with ID starting with 00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a not found: ID does not exist" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.166999 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a"} err="failed to get container status \"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a\": rpc error: code = NotFound desc = could not find container \"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a\": container with ID starting with 00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a not found: ID does not exist" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.167124 4979 scope.go:117] 
"RemoveContainer" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" Jan 30 23:08:27 crc kubenswrapper[4979]: E0130 23:08:27.167892 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4\": container with ID starting with eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4 not found: ID does not exist" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.167931 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4"} err="failed to get container status \"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4\": rpc error: code = NotFound desc = could not find container \"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4\": container with ID starting with eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4 not found: ID does not exist" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182556 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182762 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182801 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.189874 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd" (OuterVolumeSpecName: "kube-api-access-b7zpd") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "kube-api-access-b7zpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.223124 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.223773 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config" (OuterVolumeSpecName: "config") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.227458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.231567 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284842 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284880 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284892 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284906 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284916 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.461191 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.467533 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.442756 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612305 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612364 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612531 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612551 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612572 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.619174 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.619733 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk" (OuterVolumeSpecName: "kube-api-access-tptjk") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "kube-api-access-tptjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.620571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.620882 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts" (OuterVolumeSpecName: "scripts") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.637382 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.638742 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data" (OuterVolumeSpecName: "config-data") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714239 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714270 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714279 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714289 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714297 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714306 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.082492 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" path="/var/lib/kubelet/pods/4dbc7280-e667-4d13-b0a0-eb654db2900a/volumes" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.147230 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerDied","Data":"567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c"} Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.147275 
4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.147340 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230003 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b988cf8cf-m4gbb"] Jan 30 23:08:29 crc kubenswrapper[4979]: E0130 23:08:29.230405 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="init" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230426 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="init" Jan 30 23:08:29 crc kubenswrapper[4979]: E0130 23:08:29.230439 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230446 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" Jan 30 23:08:29 crc kubenswrapper[4979]: E0130 23:08:29.230464 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerName="keystone-bootstrap" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230472 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerName="keystone-bootstrap" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230654 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerName="keystone-bootstrap" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230680 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.231311 4979 util.go:30] "No sandbox for pod can be found. 
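[Annotation] The CrashLoopBackOff error above ("back-off 5m0s restarting failed container ... machine-config-daemon-kqsqg") shows the ceiling of the kubelet's documented crash-loop policy: restart delays start at 10s, double on each failure, cap at five minutes, and reset after ten minutes of successful running. That is why the same container is retried via RemoveContainer at 23:08:39 and starts again at 23:08:40, once the back-off window elapses. A sketch of that schedule (the constants mirror the documented defaults, not values read out of this kubelet build):

    // Sketch: the documented crash-loop restart schedule behind the
    // "back-off 5m0s" message above: 10s base, doubling, capped at 5m.
    package main

    import (
        "fmt"
        "time"
    )

    func backoff(failures int) time.Duration {
        const (
            base     = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := base
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("failure %d -> wait %v\n", n, backoff(n))
        }
        // failure 6 and beyond -> wait 5m0s, matching the message above.
    }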
Need to start a new one" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.234338 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.236816 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.238178 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.249241 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.255427 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b988cf8cf-m4gbb"] Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322383 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-fernet-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322440 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-scripts\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322627 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-credential-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322678 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-combined-ca-bundle\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdjf\" (UniqueName: \"kubernetes.io/projected/564a9679-372a-47bb-be3d-70b37a775724-kube-api-access-zkdjf\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-config-data\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425151 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-credential-keys\") pod 
\"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425233 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-combined-ca-bundle\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkdjf\" (UniqueName: \"kubernetes.io/projected/564a9679-372a-47bb-be3d-70b37a775724-kube-api-access-zkdjf\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425325 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-config-data\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-fernet-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-scripts\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.431970 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-fernet-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.432653 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-scripts\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.435906 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-credential-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.440916 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-combined-ca-bundle\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 
23:08:29.444693 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-config-data\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.446369 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkdjf\" (UniqueName: \"kubernetes.io/projected/564a9679-372a-47bb-be3d-70b37a775724-kube-api-access-zkdjf\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.562478 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:30 crc kubenswrapper[4979]: I0130 23:08:30.059209 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b988cf8cf-m4gbb"] Jan 30 23:08:30 crc kubenswrapper[4979]: I0130 23:08:30.160991 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b988cf8cf-m4gbb" event={"ID":"564a9679-372a-47bb-be3d-70b37a775724","Type":"ContainerStarted","Data":"91c8eaf2402a26626f1c2f111bfae28e4f1d7961f5e02f60f0dd36c3bd52cbb9"} Jan 30 23:08:31 crc kubenswrapper[4979]: I0130 23:08:31.170699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b988cf8cf-m4gbb" event={"ID":"564a9679-372a-47bb-be3d-70b37a775724","Type":"ContainerStarted","Data":"82ee1bef91f2dd8cb7a728d8f6ae1c5fa842daac7ddcc8e00624e33af28702c2"} Jan 30 23:08:31 crc kubenswrapper[4979]: I0130 23:08:31.171226 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:39 crc kubenswrapper[4979]: I0130 23:08:39.070202 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:08:40 crc kubenswrapper[4979]: I0130 23:08:40.252772 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19"} Jan 30 23:08:40 crc kubenswrapper[4979]: I0130 23:08:40.275231 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b988cf8cf-m4gbb" podStartSLOduration=11.275216486 podStartE2EDuration="11.275216486s" podCreationTimestamp="2026-01-30 23:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:31.195071433 +0000 UTC m=+5307.156318486" watchObservedRunningTime="2026-01-30 23:08:40.275216486 +0000 UTC m=+5316.236463519" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.381105 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.384228 4979 util.go:30] "No sandbox for pod can be found. 
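The attach/mount records above trace the kubelet's volume reconciler bringing up the keystone pod's six volumes: five kubernetes.io/secret volumes plus the projected service-account token. A minimal client-go sketch of what such pod-spec volumes look like (the SecretName values are assumptions for illustration; the log records only the volume names):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume names taken from the reconciler records above (pod UID
	// 564a9679-372a-47bb-be3d-70b37a775724). The backing Secret object
	// names are assumed -- the log does not show them.
	var volumes []corev1.Volume
	for _, name := range []string{"fernet-keys", "scripts", "credential-keys", "combined-ca-bundle", "config-data"} {
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "keystone-" + name}, // assumed naming
			},
		})
	}
	fmt.Printf("%d secret volumes; kube-api-access-zkdjf is the projected token volume\n", len(volumes))
}
```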
Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.420634 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.543051 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.543128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.543389 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.644682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.644775 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.644836 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.645673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.646548 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.670065 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.705384 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.212778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.447615 4979 generic.go:334] "Generic (PLEG): container finished" podID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51" exitCode=0 Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.447845 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"} Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.447999 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerStarted","Data":"03bb9ff3f2f98999776322287e8c0747d6142a3c313efb59c7290e9380d4d5a5"} Jan 30 23:09:01 crc kubenswrapper[4979]: I0130 23:09:01.166340 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:09:02 crc kubenswrapper[4979]: I0130 23:09:02.467314 4979 generic.go:334] "Generic (PLEG): container finished" podID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0" exitCode=0 Jan 30 23:09:02 crc kubenswrapper[4979]: I0130 23:09:02.467426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"} Jan 30 23:09:03 crc kubenswrapper[4979]: I0130 23:09:03.476477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerStarted","Data":"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"} Jan 30 23:09:03 crc kubenswrapper[4979]: I0130 23:09:03.498826 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wxqb" podStartSLOduration=2.054148251 podStartE2EDuration="4.498804731s" podCreationTimestamp="2026-01-30 23:08:59 +0000 UTC" firstStartedPulling="2026-01-30 23:09:00.44940348 +0000 UTC m=+5336.410650513" lastFinishedPulling="2026-01-30 23:09:02.89405995 +0000 UTC m=+5338.855306993" observedRunningTime="2026-01-30 23:09:03.497161337 +0000 UTC m=+5339.458408390" watchObservedRunningTime="2026-01-30 23:09:03.498804731 +0000 UTC m=+5339.460051774" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.644333 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.646318 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.651728 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.652150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.652175 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-nx46z" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.662480 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.761989 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.762164 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.762284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.863541 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.863714 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.863767 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.865909 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.870416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.885774 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.992575 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 23:09:06 crc kubenswrapper[4979]: I0130 23:09:06.455323 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:09:06 crc kubenswrapper[4979]: I0130 23:09:06.500702 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e","Type":"ContainerStarted","Data":"9b35f115458eae51c09b989e0ed88002066967bab95f54ae46481d8d55d31f85"} Jan 30 23:09:07 crc kubenswrapper[4979]: I0130 23:09:07.510975 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e","Type":"ContainerStarted","Data":"9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464"} Jan 30 23:09:07 crc kubenswrapper[4979]: I0130 23:09:07.530154 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.530125488 podStartE2EDuration="2.530125488s" podCreationTimestamp="2026-01-30 23:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:09:07.529174842 +0000 UTC m=+5343.490421915" watchObservedRunningTime="2026-01-30 23:09:07.530125488 +0000 UTC m=+5343.491372551" Jan 30 23:09:09 crc kubenswrapper[4979]: I0130 23:09:09.705993 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:09 crc kubenswrapper[4979]: I0130 23:09:09.706503 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:09 crc kubenswrapper[4979]: I0130 23:09:09.791152 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:10 crc kubenswrapper[4979]: I0130 23:09:10.587677 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:10 crc kubenswrapper[4979]: I0130 23:09:10.652650 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:09:12 crc kubenswrapper[4979]: I0130 23:09:12.556967 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wxqb" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" containerID="cri-o://9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" gracePeriod=2 Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.105500 4979 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.105500 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.222911 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"79df5709-b60b-4860-bd40-f6a7192e3ddd\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") "
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.223118 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"79df5709-b60b-4860-bd40-f6a7192e3ddd\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") "
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.223228 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"79df5709-b60b-4860-bd40-f6a7192e3ddd\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") "
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.224706 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities" (OuterVolumeSpecName: "utilities") pod "79df5709-b60b-4860-bd40-f6a7192e3ddd" (UID: "79df5709-b60b-4860-bd40-f6a7192e3ddd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.230210 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv" (OuterVolumeSpecName: "kube-api-access-nqmlv") pod "79df5709-b60b-4860-bd40-f6a7192e3ddd" (UID: "79df5709-b60b-4860-bd40-f6a7192e3ddd"). InnerVolumeSpecName "kube-api-access-nqmlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.268956 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79df5709-b60b-4860-bd40-f6a7192e3ddd" (UID: "79df5709-b60b-4860-bd40-f6a7192e3ddd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.325988 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.326076 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.326090 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") on node \"crc\" DevicePath \"\""
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566403 4979 generic.go:334] "Generic (PLEG): container finished" podID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" exitCode=0
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"}
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566467 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"03bb9ff3f2f98999776322287e8c0747d6142a3c313efb59c7290e9380d4d5a5"}
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566504 4979 scope.go:117] "RemoveContainer" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.597084 4979 scope.go:117] "RemoveContainer" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.606742 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"]
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.613286 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"]
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.628836 4979 scope.go:117] "RemoveContainer" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.658754 4979 scope.go:117] "RemoveContainer" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"
Jan 30 23:09:13 crc kubenswrapper[4979]: E0130 23:09:13.659841 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea\": container with ID starting with 9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea not found: ID does not exist" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.659892 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"} err="failed to get container status \"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea\": rpc error: code = NotFound desc = could not find container \"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea\": container with ID starting with 9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea not found: ID does not exist"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.659927 4979 scope.go:117] "RemoveContainer" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"
Jan 30 23:09:13 crc kubenswrapper[4979]: E0130 23:09:13.660728 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0\": container with ID starting with a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0 not found: ID does not exist" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.660760 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"} err="failed to get container status \"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0\": rpc error: code = NotFound desc = could not find container \"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0\": container with ID starting with a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0 not found: ID does not exist"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.660779 4979 scope.go:117] "RemoveContainer" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"
Jan 30 23:09:13 crc kubenswrapper[4979]: E0130 23:09:13.661380 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51\": container with ID starting with 0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51 not found: ID does not exist" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"
Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.661444 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"} err="failed to get container status \"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51\": rpc error: code = NotFound desc = could not find container \"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51\": container with ID starting with 0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51 not found: ID does not exist"
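The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are benign: the containers are already gone, so the CRI runtime answers each status query with gRPC NotFound, and the kubelet logs the error and moves on instead of retrying. A sketch of that status-code check (the helper name is illustrative; status and codes are the real gRPC packages):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI call failed only because the container
// no longer exists -- the condition behind the NotFound records above.
func alreadyGone(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	gone := status.Error(codes.NotFound, "could not find container") // shaped like the logged error
	fmt.Println(alreadyGone(gone))                                   // true: deletion already done
	fmt.Println(alreadyGone(errors.New("connection refused")))       // false: a real failure
}
```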
podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-utilities" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.073534 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-utilities" Jan 30 23:09:25 crc kubenswrapper[4979]: E0130 23:09:25.073575 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.073590 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" Jan 30 23:09:25 crc kubenswrapper[4979]: E0130 23:09:25.073607 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-content" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.073627 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-content" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.074553 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.079596 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.098847 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.249919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.250022 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.250172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.351563 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.351677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"redhat-operators-fm7qh\" (UID: 
\"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.352114 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.352194 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.352352 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.378335 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.422551 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.668737 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:26 crc kubenswrapper[4979]: I0130 23:09:26.681532 4979 generic.go:334] "Generic (PLEG): container finished" podID="13efd321-46d8-41f8-9424-6d43e957fe88" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" exitCode=0 Jan 30 23:09:26 crc kubenswrapper[4979]: I0130 23:09:26.681580 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e"} Jan 30 23:09:26 crc kubenswrapper[4979]: I0130 23:09:26.681855 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerStarted","Data":"e461e1ee73828224c73664045ee848b5b639e5074f228fad5cfc8f69411cf7bb"} Jan 30 23:09:27 crc kubenswrapper[4979]: I0130 23:09:27.690836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerStarted","Data":"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed"} Jan 30 23:09:28 crc kubenswrapper[4979]: I0130 23:09:28.704487 4979 generic.go:334] "Generic (PLEG): container finished" podID="13efd321-46d8-41f8-9424-6d43e957fe88" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" exitCode=0 Jan 30 23:09:28 crc kubenswrapper[4979]: I0130 23:09:28.704585 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed"} Jan 30 23:09:29 crc kubenswrapper[4979]: I0130 23:09:29.723443 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerStarted","Data":"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996"} Jan 30 23:09:29 crc kubenswrapper[4979]: I0130 23:09:29.756466 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fm7qh" podStartSLOduration=3.291667935 podStartE2EDuration="5.756442785s" podCreationTimestamp="2026-01-30 23:09:24 +0000 UTC" firstStartedPulling="2026-01-30 23:09:26.682634497 +0000 UTC m=+5362.643881530" lastFinishedPulling="2026-01-30 23:09:29.147409337 +0000 UTC m=+5365.108656380" observedRunningTime="2026-01-30 23:09:29.750578107 +0000 UTC m=+5365.711825140" watchObservedRunningTime="2026-01-30 23:09:29.756442785 +0000 UTC m=+5365.717689808" Jan 30 23:09:35 crc kubenswrapper[4979]: I0130 23:09:35.423372 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:35 crc kubenswrapper[4979]: I0130 23:09:35.424454 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:36 crc kubenswrapper[4979]: I0130 23:09:36.473425 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm7qh" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" probeResult="failure" output=< Jan 30 23:09:36 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 23:09:36 crc kubenswrapper[4979]: > Jan 30 23:09:45 crc kubenswrapper[4979]: I0130 23:09:45.469649 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:45 crc kubenswrapper[4979]: I0130 23:09:45.540626 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:45 crc kubenswrapper[4979]: I0130 23:09:45.712431 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:46 crc kubenswrapper[4979]: I0130 23:09:46.877332 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fm7qh" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" containerID="cri-o://ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" gracePeriod=2 Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.353742 4979 util.go:48] "No ready sandbox for pod can be found. 
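The startup-probe failure above is the catalog pod's gRPC health check timing out against the registry server's port 50051 before the catalog has finished loading; a few seconds later the same probe reports started. A minimal Go reproduction of that check under the same one-second budget (the localhost target is an assumption; inside the pod the probe runs a grpc_health_probe-style client):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// ":50051" is the port from the probe output; localhost is assumed here.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Println("probe failed:", err) // the logged case: no connection within 1s
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog is loaded
}
```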
Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.458522 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"13efd321-46d8-41f8-9424-6d43e957fe88\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.458690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"13efd321-46d8-41f8-9424-6d43e957fe88\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.458748 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"13efd321-46d8-41f8-9424-6d43e957fe88\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.465247 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l" (OuterVolumeSpecName: "kube-api-access-fl62l") pod "13efd321-46d8-41f8-9424-6d43e957fe88" (UID: "13efd321-46d8-41f8-9424-6d43e957fe88"). InnerVolumeSpecName "kube-api-access-fl62l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.494827 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities" (OuterVolumeSpecName: "utilities") pod "13efd321-46d8-41f8-9424-6d43e957fe88" (UID: "13efd321-46d8-41f8-9424-6d43e957fe88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.561597 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.561656 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.647232 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13efd321-46d8-41f8-9424-6d43e957fe88" (UID: "13efd321-46d8-41f8-9424-6d43e957fe88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.662615 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887615 4979 generic.go:334] "Generic (PLEG): container finished" podID="13efd321-46d8-41f8-9424-6d43e957fe88" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" exitCode=0 Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887690 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996"} Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887702 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"e461e1ee73828224c73664045ee848b5b639e5074f228fad5cfc8f69411cf7bb"} Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887871 4979 scope.go:117] "RemoveContainer" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.922882 4979 scope.go:117] "RemoveContainer" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.934525 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.938021 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.954272 4979 scope.go:117] "RemoveContainer" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.004971 4979 scope.go:117] "RemoveContainer" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" Jan 30 23:09:48 crc kubenswrapper[4979]: E0130 23:09:48.005339 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996\": container with ID starting with ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996 not found: ID does not exist" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.005402 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996"} err="failed to get container status \"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996\": rpc error: code = NotFound desc = could not find container \"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996\": container with ID starting with ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996 not found: ID does not exist" Jan 30 23:09:48 crc 
kubenswrapper[4979]: I0130 23:09:48.005429 4979 scope.go:117] "RemoveContainer" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" Jan 30 23:09:48 crc kubenswrapper[4979]: E0130 23:09:48.005760 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed\": container with ID starting with 4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed not found: ID does not exist" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.005826 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed"} err="failed to get container status \"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed\": rpc error: code = NotFound desc = could not find container \"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed\": container with ID starting with 4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed not found: ID does not exist" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.005880 4979 scope.go:117] "RemoveContainer" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" Jan 30 23:09:48 crc kubenswrapper[4979]: E0130 23:09:48.006232 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e\": container with ID starting with 773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e not found: ID does not exist" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.006290 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e"} err="failed to get container status \"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e\": rpc error: code = NotFound desc = could not find container \"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e\": container with ID starting with 773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e not found: ID does not exist" Jan 30 23:09:49 crc kubenswrapper[4979]: I0130 23:09:49.080096 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" path="/var/lib/kubelet/pods/13efd321-46d8-41f8-9424-6d43e957fe88/volumes" Jan 30 23:10:37 crc kubenswrapper[4979]: I0130 23:10:37.087534 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:10:37 crc kubenswrapper[4979]: I0130 23:10:37.088169 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:10:39 crc kubenswrapper[4979]: I0130 23:10:39.086767 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579619ae-df83-40ff-8580-331060c16faf" path="/var/lib/kubelet/pods/579619ae-df83-40ff-8580-331060c16faf/volumes" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.415319 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:10:43 crc kubenswrapper[4979]: E0130 23:10:43.416065 4979 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-content" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416083 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-content" Jan 30 23:10:43 crc kubenswrapper[4979]: E0130 23:10:43.416094 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416101 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" Jan 30 23:10:43 crc kubenswrapper[4979]: E0130 23:10:43.416114 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-utilities" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416124 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-utilities" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416314 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416970 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.425081 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.426324 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.428600 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.436115 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.442399 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.543932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.544050 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.544074 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " 
pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.544312 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.645770 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.646203 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.646263 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.646289 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.647127 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.647208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.667061 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.669174 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkprc\" (UniqueName: 
\"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.736263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.793219 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.213588 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.301558 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:10:44 crc kubenswrapper[4979]: W0130 23:10:44.303121 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5bf2d6f_952e_4cec_938b_e1d00042c3ad.slice/crio-06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d WatchSource:0}: Error finding container 06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d: Status 404 returned error can't find the container with id 06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.404096 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-088a-account-create-update-gl7pk" event={"ID":"c5bf2d6f-952e-4cec-938b-e1d00042c3ad","Type":"ContainerStarted","Data":"06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d"} Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.405628 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mdk2v" event={"ID":"800775b4-f78f-4f2f-9d21-4dd42458db2b","Type":"ContainerStarted","Data":"d3369dffd50838ab28f0e6ede24b2ca0bfde61c3882d7e9db2200f77057e58a0"} Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.420212 4979 generic.go:334] "Generic (PLEG): container finished" podID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerID="46d964a0839cd8efea2510cfac9bc323533200f0741e6142ba6a532c576e85b4" exitCode=0 Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.420281 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mdk2v" event={"ID":"800775b4-f78f-4f2f-9d21-4dd42458db2b","Type":"ContainerDied","Data":"46d964a0839cd8efea2510cfac9bc323533200f0741e6142ba6a532c576e85b4"} Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.425216 4979 generic.go:334] "Generic (PLEG): container finished" podID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerID="088e2e7d854d7dc05cd4dbe8fe4c7ffcbdee731d873f6f602ab10d8c9fb6c170" exitCode=0 Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.425275 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-088a-account-create-update-gl7pk" event={"ID":"c5bf2d6f-952e-4cec-938b-e1d00042c3ad","Type":"ContainerDied","Data":"088e2e7d854d7dc05cd4dbe8fe4c7ffcbdee731d873f6f602ab10d8c9fb6c170"} Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.771797 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.789778 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.915958 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"800775b4-f78f-4f2f-9d21-4dd42458db2b\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.916018 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.916078 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"800775b4-f78f-4f2f-9d21-4dd42458db2b\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.916243 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.917530 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "800775b4-f78f-4f2f-9d21-4dd42458db2b" (UID: "800775b4-f78f-4f2f-9d21-4dd42458db2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.917571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5bf2d6f-952e-4cec-938b-e1d00042c3ad" (UID: "c5bf2d6f-952e-4cec-938b-e1d00042c3ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.923018 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz" (OuterVolumeSpecName: "kube-api-access-xhzzz") pod "800775b4-f78f-4f2f-9d21-4dd42458db2b" (UID: "800775b4-f78f-4f2f-9d21-4dd42458db2b"). InnerVolumeSpecName "kube-api-access-xhzzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.923344 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc" (OuterVolumeSpecName: "kube-api-access-rkprc") pod "c5bf2d6f-952e-4cec-938b-e1d00042c3ad" (UID: "c5bf2d6f-952e-4cec-938b-e1d00042c3ad"). InnerVolumeSpecName "kube-api-access-rkprc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018152 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018183 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018196 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018209 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.444352 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mdk2v" event={"ID":"800775b4-f78f-4f2f-9d21-4dd42458db2b","Type":"ContainerDied","Data":"d3369dffd50838ab28f0e6ede24b2ca0bfde61c3882d7e9db2200f77057e58a0"} Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.444410 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3369dffd50838ab28f0e6ede24b2ca0bfde61c3882d7e9db2200f77057e58a0" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.444596 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.448351 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.448890 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-088a-account-create-update-gl7pk" event={"ID":"c5bf2d6f-952e-4cec-938b-e1d00042c3ad","Type":"ContainerDied","Data":"06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d"} Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.449058 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.642978 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qpzjk"] Jan 30 23:10:48 crc kubenswrapper[4979]: E0130 23:10:48.643857 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerName="mariadb-account-create-update" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.643873 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerName="mariadb-account-create-update" Jan 30 23:10:48 crc kubenswrapper[4979]: E0130 23:10:48.643889 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerName="mariadb-database-create" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.643895 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerName="mariadb-database-create" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.644196 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerName="mariadb-account-create-update" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.644229 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerName="mariadb-database-create" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.644930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.647357 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fpkxv" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.648859 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.653076 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qpzjk"] Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.750330 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.750428 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.750466 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.852738 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.852802 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.852922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.861007 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.861312 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.874571 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.973231 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:49 crc kubenswrapper[4979]: I0130 23:10:49.411160 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qpzjk"] Jan 30 23:10:49 crc kubenswrapper[4979]: I0130 23:10:49.480034 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerStarted","Data":"2e1f28ff649507ef80db11e5de7bbb5433150df8b46080c23b4af930f6e46fe6"} Jan 30 23:10:50 crc kubenswrapper[4979]: I0130 23:10:50.487499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerStarted","Data":"6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6"} Jan 30 23:10:50 crc kubenswrapper[4979]: I0130 23:10:50.505324 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qpzjk" podStartSLOduration=2.505308687 podStartE2EDuration="2.505308687s" podCreationTimestamp="2026-01-30 23:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:50.498936035 +0000 UTC m=+5446.460183068" watchObservedRunningTime="2026-01-30 23:10:50.505308687 +0000 UTC m=+5446.466555720" Jan 30 23:10:51 crc kubenswrapper[4979]: I0130 23:10:51.497397 4979 generic.go:334] "Generic (PLEG): container finished" podID="338244cb-adb6-4402-ba74-378f70078ebd" containerID="6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6" exitCode=0 Jan 30 23:10:51 crc kubenswrapper[4979]: I0130 23:10:51.497441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerDied","Data":"6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6"} Jan 30 23:10:52 crc kubenswrapper[4979]: I0130 23:10:52.936214 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.042077 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"338244cb-adb6-4402-ba74-378f70078ebd\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.042298 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"338244cb-adb6-4402-ba74-378f70078ebd\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.042352 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"338244cb-adb6-4402-ba74-378f70078ebd\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.049514 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "338244cb-adb6-4402-ba74-378f70078ebd" (UID: "338244cb-adb6-4402-ba74-378f70078ebd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.049703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq" (OuterVolumeSpecName: "kube-api-access-bmngq") pod "338244cb-adb6-4402-ba74-378f70078ebd" (UID: "338244cb-adb6-4402-ba74-378f70078ebd"). InnerVolumeSpecName "kube-api-access-bmngq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.074572 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "338244cb-adb6-4402-ba74-378f70078ebd" (UID: "338244cb-adb6-4402-ba74-378f70078ebd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.144670 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.144712 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.144725 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.519958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerDied","Data":"2e1f28ff649507ef80db11e5de7bbb5433150df8b46080c23b4af930f6e46fe6"} Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.520341 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e1f28ff649507ef80db11e5de7bbb5433150df8b46080c23b4af930f6e46fe6" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.520050 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.691743 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c85d579b5-svwjh"] Jan 30 23:10:53 crc kubenswrapper[4979]: E0130 23:10:53.692128 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338244cb-adb6-4402-ba74-378f70078ebd" containerName="barbican-db-sync" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.692149 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="338244cb-adb6-4402-ba74-378f70078ebd" containerName="barbican-db-sync" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.692351 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="338244cb-adb6-4402-ba74-378f70078ebd" containerName="barbican-db-sync" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.693490 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.695417 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.697809 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fpkxv" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.698218 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.707691 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c85d579b5-svwjh"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754114 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd72817a-eff0-4fac-ba2b-040115385897-logs\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754182 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-combined-ca-bundle\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754417 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data-custom\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754500 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754666 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nntg\" (UniqueName: \"kubernetes.io/projected/fd72817a-eff0-4fac-ba2b-040115385897-kube-api-access-5nntg\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.815201 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.816997 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.833525 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7ff7d98446-pts46"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.835019 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.838242 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.845958 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.854809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ff7d98446-pts46"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855690 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd72817a-eff0-4fac-ba2b-040115385897-logs\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855728 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-combined-ca-bundle\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855793 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855829 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855855 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data-custom\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855882 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855914 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nntg\" (UniqueName: \"kubernetes.io/projected/fd72817a-eff0-4fac-ba2b-040115385897-kube-api-access-5nntg\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.856177 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd72817a-eff0-4fac-ba2b-040115385897-logs\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.866560 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-combined-ca-bundle\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.868672 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data-custom\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.892351 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.897626 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nntg\" (UniqueName: \"kubernetes.io/projected/fd72817a-eff0-4fac-ba2b-040115385897-kube-api-access-5nntg\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.933902 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d46697d68-frccf"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.935964 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.943163 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.954448 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d46697d68-frccf"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958111 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958180 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958243 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958281 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958317 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e21af86-2d45-409c-b692-97bc60c3d806-logs\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958363 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958390 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vp2d\" (UniqueName: \"kubernetes.io/projected/0e21af86-2d45-409c-b692-97bc60c3d806-kube-api-access-2vp2d\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958423 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958449 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data-custom\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.959080 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.959462 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.959953 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.961198 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.995123 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.017014 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060490 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e21af86-2d45-409c-b692-97bc60c3d806-logs\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060860 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-combined-ca-bundle\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060889 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f76ba6-bd87-414d-b226-07f7a8705fea-logs\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vp2d\" (UniqueName: \"kubernetes.io/projected/0e21af86-2d45-409c-b692-97bc60c3d806-kube-api-access-2vp2d\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data-custom\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061017 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061101 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8lt\" (UniqueName: \"kubernetes.io/projected/58f76ba6-bd87-414d-b226-07f7a8705fea-kube-api-access-fn8lt\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061150 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data-custom\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061218 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061933 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e21af86-2d45-409c-b692-97bc60c3d806-logs\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.067592 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data-custom\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.070458 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.079930 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.080392 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vp2d\" (UniqueName: \"kubernetes.io/projected/0e21af86-2d45-409c-b692-97bc60c3d806-kube-api-access-2vp2d\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.134919 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.153652 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.167414 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.168818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8lt\" (UniqueName: \"kubernetes.io/projected/58f76ba6-bd87-414d-b226-07f7a8705fea-kube-api-access-fn8lt\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.169371 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data-custom\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.170267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-combined-ca-bundle\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.170338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f76ba6-bd87-414d-b226-07f7a8705fea-logs\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.171400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f76ba6-bd87-414d-b226-07f7a8705fea-logs\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.174676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data-custom\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.175527 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.183710 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-combined-ca-bundle\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.186590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8lt\" (UniqueName: \"kubernetes.io/projected/58f76ba6-bd87-414d-b226-07f7a8705fea-kube-api-access-fn8lt\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf"
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.263263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d46697d68-frccf"
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.493713 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c85d579b5-svwjh"]
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.529322 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c85d579b5-svwjh" event={"ID":"fd72817a-eff0-4fac-ba2b-040115385897","Type":"ContainerStarted","Data":"ef96517a818a235b7731c6fe8e7babffd3efd15c42b1646d202d6a8b588429d0"}
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.649788 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"]
Jan 30 23:10:54 crc kubenswrapper[4979]: W0130 23:10:54.653956 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf646c27_e12e_47e1_b540_6f37012f4f48.slice/crio-c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37 WatchSource:0}: Error finding container c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37: Status 404 returned error can't find the container with id c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.940477 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ff7d98446-pts46"]
Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.948622 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d46697d68-frccf"]
Jan 30 23:10:54 crc kubenswrapper[4979]: W0130 23:10:54.964140 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e21af86_2d45_409c_b692_97bc60c3d806.slice/crio-9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b WatchSource:0}: Error finding container 9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b: Status 404 returned error can't find the container with id 9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b
Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.538712 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"e49efcd48eea953dbbd9840b65e722e4f15136fec9106c84099e41a796d2dbad"}
Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.538762 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"522eb023b7f322ca244a6d80b07aa9c52f273a7ff2be4ad9dba2a2f6be024a9d"}
Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.538774 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b"}
event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.540787 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c85d579b5-svwjh" event={"ID":"fd72817a-eff0-4fac-ba2b-040115385897","Type":"ContainerStarted","Data":"59ea1ac095ff1d17f585dc428ba7608bbdb9e409cd5604db36f8018d15758212"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.540847 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c85d579b5-svwjh" event={"ID":"fd72817a-eff0-4fac-ba2b-040115385897","Type":"ContainerStarted","Data":"fdbce8fa2f629b54856b8da8d6ff1943f78d20c4f11ea88b09129c2115d93b27"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542651 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d46697d68-frccf" event={"ID":"58f76ba6-bd87-414d-b226-07f7a8705fea","Type":"ContainerStarted","Data":"98e80f23f06c248467a2a0795206707914c7aa181e1dab69b59e6edd13acfd54"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d46697d68-frccf" event={"ID":"58f76ba6-bd87-414d-b226-07f7a8705fea","Type":"ContainerStarted","Data":"e34e1ba4c4794ef5bae79690f36965d1e0d16deb645eea9dff77d7d2205f0623"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542689 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d46697d68-frccf" event={"ID":"58f76ba6-bd87-414d-b226-07f7a8705fea","Type":"ContainerStarted","Data":"dbb1fdb38b494cfca275674ddd8d6d350e061f6778781c86373847ee87a9a560"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542859 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.544551 4979 generic.go:334] "Generic (PLEG): container finished" podID="af646c27-e12e-47e1-b540-6f37012f4f48" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" exitCode=0 Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.544592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerDied","Data":"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.544707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerStarted","Data":"c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.564024 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" podStartSLOduration=2.56400063 podStartE2EDuration="2.56400063s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:55.560900557 +0000 UTC m=+5451.522147590" watchObservedRunningTime="2026-01-30 23:10:55.56400063 +0000 UTC m=+5451.525247663" Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.580604 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-6d46697d68-frccf" podStartSLOduration=2.580584506 podStartE2EDuration="2.580584506s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:55.576500826 +0000 UTC m=+5451.537747859" watchObservedRunningTime="2026-01-30 23:10:55.580584506 +0000 UTC m=+5451.541831539" Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.635574 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c85d579b5-svwjh" podStartSLOduration=2.635554684 podStartE2EDuration="2.635554684s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:55.624694512 +0000 UTC m=+5451.585941545" watchObservedRunningTime="2026-01-30 23:10:55.635554684 +0000 UTC m=+5451.596801717" Jan 30 23:10:56 crc kubenswrapper[4979]: I0130 23:10:56.556828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerStarted","Data":"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea"} Jan 30 23:10:56 crc kubenswrapper[4979]: I0130 23:10:56.557828 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:56 crc kubenswrapper[4979]: I0130 23:10:56.557851 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:11:02 crc kubenswrapper[4979]: I0130 23:11:02.039845 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:11:02 crc kubenswrapper[4979]: I0130 23:11:02.040786 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.138334 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.174796 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7968668d89-w7l26" podStartSLOduration=11.174763133 podStartE2EDuration="11.174763133s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:56.590143294 +0000 UTC m=+5452.551390327" watchObservedRunningTime="2026-01-30 23:11:04.174763133 +0000 UTC m=+5460.136010176" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.217989 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.220113 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7457648489-f9xxs" 
podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" containerID="cri-o://99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3" gracePeriod=10 Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.641374 4979 generic.go:334] "Generic (PLEG): container finished" podID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerID="99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3" exitCode=0 Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.641453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerDied","Data":"99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3"} Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.738078 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810323 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810495 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810532 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810642 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810673 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.818544 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6" (OuterVolumeSpecName: "kube-api-access-kckg6") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "kube-api-access-kckg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.855284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config" (OuterVolumeSpecName: "config") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.856140 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.870209 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.877237 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914811 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914883 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914909 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914933 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914957 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.650249 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerDied","Data":"cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5"} Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.650647 4979 scope.go:117] "RemoveContainer" containerID="99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.650895 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.673116 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.679932 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.684049 4979 scope.go:117] "RemoveContainer" containerID="681ef7059193b0717b0eb969706fa681ca26f969cea9f506cb0573eaef292ba8" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.757836 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.846010 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:11:07 crc kubenswrapper[4979]: I0130 23:11:07.081094 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" path="/var/lib/kubelet/pods/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7/volumes" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.499993 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:11:17 crc kubenswrapper[4979]: E0130 23:11:17.500939 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="init" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.500956 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="init" Jan 30 23:11:17 crc kubenswrapper[4979]: E0130 23:11:17.500980 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.500989 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.501208 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.501894 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.516355 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.604584 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.605909 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.609904 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.617449 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.671673 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.671769 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772664 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772803 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.773810 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.802993 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod 
\"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.819568 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.874359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.874837 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.875600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.899721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.927573 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.927573 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w"
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.341266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2ltc5"]
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.410396 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"]
Jan 30 23:11:18 crc kubenswrapper[4979]: W0130 23:11:18.415424 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb871a72e_a648_4c40_b5eb_604c75307e21.slice/crio-a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb WatchSource:0}: Error finding container a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb: Status 404 returned error can't find the container with id a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.779381 4979 generic.go:334] "Generic (PLEG): container finished" podID="b871a72e-a648-4c40-b5eb-604c75307e21" containerID="6488fa6f75b07a884cb1c9e243ae1419c47f4f31507eee906fc6e83084e37e42" exitCode=0
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.779719 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575-account-create-update-hrq7w" event={"ID":"b871a72e-a648-4c40-b5eb-604c75307e21","Type":"ContainerDied","Data":"6488fa6f75b07a884cb1c9e243ae1419c47f4f31507eee906fc6e83084e37e42"}
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.779753 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575-account-create-update-hrq7w" event={"ID":"b871a72e-a648-4c40-b5eb-604c75307e21","Type":"ContainerStarted","Data":"a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb"}
Jan 30 23:11:18 crc kubenswrapper[4979]: E0130 23:11:18.781315 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4a95_be2f_4c0d_a789_f7505dcdfd97.slice/crio-7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4a95_be2f_4c0d_a789_f7505dcdfd97.slice/crio-conmon-7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.782796 4979 generic.go:334] "Generic (PLEG): container finished" podID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerID="7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5" exitCode=0
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.782823 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2ltc5" event={"ID":"b92c4a95-be2f-4c0d-a789-f7505dcdfd97","Type":"ContainerDied","Data":"7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5"}
Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.782864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2ltc5" event={"ID":"b92c4a95-be2f-4c0d-a789-f7505dcdfd97","Type":"ContainerStarted","Data":"850c228c9585efa67c6390fd7f34afcf8ba7838076a007fbe5a90b9d03314299"}
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.142572 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2ltc5"
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.229086 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w"
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.241940 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") "
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.242078 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") "
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.244147 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b92c4a95-be2f-4c0d-a789-f7505dcdfd97" (UID: "b92c4a95-be2f-4c0d-a789-f7505dcdfd97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.245136 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.252698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx" (OuterVolumeSpecName: "kube-api-access-vxbwx") pod "b92c4a95-be2f-4c0d-a789-f7505dcdfd97" (UID: "b92c4a95-be2f-4c0d-a789-f7505dcdfd97"). InnerVolumeSpecName "kube-api-access-vxbwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346188 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"b871a72e-a648-4c40-b5eb-604c75307e21\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") "
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346424 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"b871a72e-a648-4c40-b5eb-604c75307e21\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") "
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346672 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b871a72e-a648-4c40-b5eb-604c75307e21" (UID: "b871a72e-a648-4c40-b5eb-604c75307e21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346804 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346826 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.349376 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948" (OuterVolumeSpecName: "kube-api-access-7m948") pod "b871a72e-a648-4c40-b5eb-604c75307e21" (UID: "b871a72e-a648-4c40-b5eb-604c75307e21"). InnerVolumeSpecName "kube-api-access-7m948". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.447860 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.803597 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w"
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.803640 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575-account-create-update-hrq7w" event={"ID":"b871a72e-a648-4c40-b5eb-604c75307e21","Type":"ContainerDied","Data":"a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb"}
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.803696 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb"
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.806898 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2ltc5" event={"ID":"b92c4a95-be2f-4c0d-a789-f7505dcdfd97","Type":"ContainerDied","Data":"850c228c9585efa67c6390fd7f34afcf8ba7838076a007fbe5a90b9d03314299"}
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.806954 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850c228c9585efa67c6390fd7f34afcf8ba7838076a007fbe5a90b9d03314299"
Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.806969 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2ltc5"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742150 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2w7hf"]
Jan 30 23:11:22 crc kubenswrapper[4979]: E0130 23:11:22.742748 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerName="mariadb-database-create"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742759 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerName="mariadb-database-create"
Jan 30 23:11:22 crc kubenswrapper[4979]: E0130 23:11:22.742792 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" containerName="mariadb-account-create-update"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742798 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" containerName="mariadb-account-create-update"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742947 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" containerName="mariadb-account-create-update"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742960 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerName="mariadb-database-create"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.743552 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.749971 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.750549 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gzwzm"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.750804 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.756075 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2w7hf"]
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.890788 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.890904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.890932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
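Each kubenswrapper message carries a klog header: a severity letter (I, W, or E in this capture), MMDD, wall-clock time, the PID, and source file:line. Note that cpu_manager.go:410 logs its routine "RemoveStaleState" cleanup at E severity even though the containers it removes (the finished mariadb create jobs above) are simply gone. A small Go header parser, under the assumption that fields are single-space separated as in this capture (stock klog pads the PID field):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klog header, e.g. `E0130 23:11:22.742748 4979 cpu_manager.go:410]`.
    var hdrRe = regexp.MustCompile(`^([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) ([\w./]+:\d+)\]`)

    func main() {
        h := `E0130 23:11:22.742748 4979 cpu_manager.go:410] "RemoveStaleState: removing container"`
        if m := hdrRe.FindStringSubmatch(h); m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\n", m[1], m[2], m[3], m[4], m[5])
        }
    }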
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.992799 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.992912 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.992941 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.998731 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.999445 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.015798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.085584 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.562927 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2w7hf"]
Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.843722 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerStarted","Data":"75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134"}
Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.845192 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerStarted","Data":"572fe0142177b6810aeae3d7ced70d935d0c5b5c45be8044697b9d3495773b65"}
Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.862976 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2w7hf" podStartSLOduration=1.8629562179999999 podStartE2EDuration="1.862956218s" podCreationTimestamp="2026-01-30 23:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:11:23.858745235 +0000 UTC m=+5479.819992268" watchObservedRunningTime="2026-01-30 23:11:23.862956218 +0000 UTC m=+5479.824203261"
Jan 30 23:11:26 crc kubenswrapper[4979]: I0130 23:11:26.608270 4979 scope.go:117] "RemoveContainer" containerID="2d0a143830dd73a91f1cb09ef9f3967be5ae0e4eb61c252cb0405d7e7fe00ec4"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.392226 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"]
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.399341 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.416263 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"]
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.483279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.483438 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.483490 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.584746 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.584804 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.584864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.585655 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.585934 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.616718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.783227 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.887592 4979 generic.go:334] "Generic (PLEG): container finished" podID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerID="75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134" exitCode=0
Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.887664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerDied","Data":"75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134"}
Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.290369 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"]
Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.900686 4979 generic.go:334] "Generic (PLEG): container finished" podID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" exitCode=0
Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.901247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872"}
Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.901290 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerStarted","Data":"43983c64979bfec7b92346854621cf6924983a165cc6f7fb14746e77bc6dda46"}
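The event={...} payload on the PLEG lines above and below is valid JSON: ID is the pod UID, Type is the lifecycle transition, and Data is the container (or sandbox) ID. Once the payload is cut out of a line it unmarshals directly; a sketch using one of the entries above (the struct is ours, shaped after the printed payload):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Shape of the event={...} payload as printed in these logs.
    type plegEvent struct {
        ID   string // pod UID
        Type string // ContainerStarted, ContainerDied, ...
        Data string // container or sandbox ID
    }

    func main() {
        payload := `{"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(payload), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod=%s type=%s id=%s\n", ev.ID, ev.Type, ev.Data)
    }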
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.252974 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.431979 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") "
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.432381 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") "
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.432641 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") "
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.443306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp" (OuterVolumeSpecName: "kube-api-access-96vcp") pod "fc87a0f7-9b2b-46ce-a000-c1c5195535d8" (UID: "fc87a0f7-9b2b-46ce-a000-c1c5195535d8"). InnerVolumeSpecName "kube-api-access-96vcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.453872 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config" (OuterVolumeSpecName: "config") pod "fc87a0f7-9b2b-46ce-a000-c1c5195535d8" (UID: "fc87a0f7-9b2b-46ce-a000-c1c5195535d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.457353 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc87a0f7-9b2b-46ce-a000-c1c5195535d8" (UID: "fc87a0f7-9b2b-46ce-a000-c1c5195535d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.535267 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.535295 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.535308 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") on node \"crc\" DevicePath \"\""
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.913800 4979 generic.go:334] "Generic (PLEG): container finished" podID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" exitCode=0
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.913926 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da"}
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.916202 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerDied","Data":"572fe0142177b6810aeae3d7ced70d935d0c5b5c45be8044697b9d3495773b65"}
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.916233 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572fe0142177b6810aeae3d7ced70d935d0c5b5c45be8044697b9d3495773b65"
Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.916243 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2w7hf"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.055475 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"]
Jan 30 23:11:30 crc kubenswrapper[4979]: E0130 23:11:30.056080 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerName="neutron-db-sync"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.056127 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerName="neutron-db-sync"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.056269 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerName="neutron-db-sync"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.057198 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.072663 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"]
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.134834 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-998b6c5dc-s8h29"]
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.136500 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.140392 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gzwzm"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.140402 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.140501 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148488 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148579 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148734 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148764 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.153449 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-998b6c5dc-s8h29"]
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250272 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250343 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-combined-ca-bundle\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250392 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250414 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.251381 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.251395 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.251425 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.252441 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.253083 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.253427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.253785 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-httpd-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.254113 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkxb\" (UniqueName: \"kubernetes.io/projected/633158e6-5d40-43e2-a2c9-94e611b32d3c-kube-api-access-xmkxb\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.254154 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.278199 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356750 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-httpd-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356832 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkxb\" (UniqueName: \"kubernetes.io/projected/633158e6-5d40-43e2-a2c9-94e611b32d3c-kube-api-access-xmkxb\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356863 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-combined-ca-bundle\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.360327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-httpd-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.360742 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-combined-ca-bundle\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.366222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.375769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkxb\" (UniqueName: \"kubernetes.io/projected/633158e6-5d40-43e2-a2c9-94e611b32d3c-kube-api-access-xmkxb\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.386310 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.452350 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.682842 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"]
Jan 30 23:11:30 crc kubenswrapper[4979]: W0130 23:11:30.688053 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e29f1a4_dca0_42b8_8ee9_e040433dad76.slice/crio-4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3 WatchSource:0}: Error finding container 4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3: Status 404 returned error can't find the container with id 4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.926574 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerStarted","Data":"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb"}
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.928679 4979 generic.go:334] "Generic (PLEG): container finished" podID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerID="70e7a3e289c9bede605a4d28f895b056899de6dff342f7658a2ae4deec0c89ae" exitCode=0
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.928796 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerDied","Data":"70e7a3e289c9bede605a4d28f895b056899de6dff342f7658a2ae4deec0c89ae"}
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.928885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerStarted","Data":"4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3"}
Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.960897 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jfl5" podStartSLOduration=2.561466347 podStartE2EDuration="3.960878789s" podCreationTimestamp="2026-01-30 23:11:27 +0000 UTC" firstStartedPulling="2026-01-30 23:11:28.904737347 +0000 UTC m=+5484.865984380" lastFinishedPulling="2026-01-30 23:11:30.304149789 +0000 UTC m=+5486.265396822" observedRunningTime="2026-01-30 23:11:30.953665765 +0000 UTC m=+5486.914912798" watchObservedRunningTime="2026-01-30 23:11:30.960878789 +0000 UTC m=+5486.922125822"
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.116122 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-998b6c5dc-s8h29"]
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-998b6c5dc-s8h29" event={"ID":"633158e6-5d40-43e2-a2c9-94e611b32d3c","Type":"ContainerStarted","Data":"63a6cbc50456cd54d75b72df942bd53620222a8e491b9fd4f175d83f073eb9ca"}
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937781 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-998b6c5dc-s8h29"
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937801 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-998b6c5dc-s8h29" event={"ID":"633158e6-5d40-43e2-a2c9-94e611b32d3c","Type":"ContainerStarted","Data":"7fbca4c6e87321c1e1bd6191d243f2aa3ef7c9d61539e92cf7b446c886753606"}
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937816 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-998b6c5dc-s8h29" event={"ID":"633158e6-5d40-43e2-a2c9-94e611b32d3c","Type":"ContainerStarted","Data":"c2026a78dc728c6cb76aaa32fb1d71ec9beb7127f311151b50da5fb60fde77dd"}
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.939634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerStarted","Data":"1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4"}
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.939947 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt"
Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.956901 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-998b6c5dc-s8h29" podStartSLOduration=1.956876092 podStartE2EDuration="1.956876092s" podCreationTimestamp="2026-01-30 23:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:11:31.953148822 +0000 UTC m=+5487.914395865" watchObservedRunningTime="2026-01-30 23:11:31.956876092 +0000 UTC m=+5487.918123125"
Jan 30 23:11:32 crc kubenswrapper[4979]: I0130 23:11:32.039753 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:11:32 crc kubenswrapper[4979]: I0130 23:11:32.039805 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.783712 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jfl5"
Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.785566 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jfl5"
pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.958007 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" podStartSLOduration=7.957984538 podStartE2EDuration="7.957984538s" podCreationTimestamp="2026-01-30 23:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:11:31.976230773 +0000 UTC m=+5487.937477796" watchObservedRunningTime="2026-01-30 23:11:37.957984538 +0000 UTC m=+5493.919231571" Jan 30 23:11:38 crc kubenswrapper[4979]: I0130 23:11:38.048567 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:38 crc kubenswrapper[4979]: I0130 23:11:38.438606 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.007836 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jfl5" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" containerID="cri-o://8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" gracePeriod=2 Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.388170 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.441133 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.441426 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7968668d89-w7l26" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" containerID="cri-o://f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" gracePeriod=10 Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.465867 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.563700 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.563813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.563948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.565410 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities" (OuterVolumeSpecName: "utilities") pod "712bebd9-29c5-4d26-b254-b7d1dfdb8292" (UID: "712bebd9-29c5-4d26-b254-b7d1dfdb8292"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.573784 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f" (OuterVolumeSpecName: "kube-api-access-lpl7f") pod "712bebd9-29c5-4d26-b254-b7d1dfdb8292" (UID: "712bebd9-29c5-4d26-b254-b7d1dfdb8292"). InnerVolumeSpecName "kube-api-access-lpl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.595648 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "712bebd9-29c5-4d26-b254-b7d1dfdb8292" (UID: "712bebd9-29c5-4d26-b254-b7d1dfdb8292"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.665566 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.665602 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.665614 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.854420 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970102 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970218 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970296 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970355 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.974161 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj" (OuterVolumeSpecName: "kube-api-access-h9zhj") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "kube-api-access-h9zhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.027186 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.027535 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028498 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config" (OuterVolumeSpecName: "config") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028494 4979 generic.go:334] "Generic (PLEG): container finished" podID="af646c27-e12e-47e1-b540-6f37012f4f48" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" exitCode=0 Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028521 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerDied","Data":"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028556 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerDied","Data":"c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028579 4979 scope.go:117] "RemoveContainer" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028602 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032409 4979 generic.go:334] "Generic (PLEG): container finished" podID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" exitCode=0 Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032444 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"43983c64979bfec7b92346854621cf6924983a165cc6f7fb14746e77bc6dda46"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032519 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.033616 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.062162 4979 scope.go:117] "RemoveContainer" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082784 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082811 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082820 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082830 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082839 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.087727 4979 scope.go:117] "RemoveContainer" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.088141 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea\": container with ID starting with f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea not found: ID does not exist" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088186 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea"} err="failed to get container status \"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea\": rpc error: code = NotFound desc = could not find container \"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea\": container with ID starting with f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088208 4979 scope.go:117] "RemoveContainer" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.088415 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40\": container with ID starting with 2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40 not found: ID does not exist" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088439 4979 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40"} err="failed to get container status \"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40\": rpc error: code = NotFound desc = could not find container \"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40\": container with ID starting with 2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40 not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088455 4979 scope.go:117] "RemoveContainer" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.100467 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.100514 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.116871 4979 scope.go:117] "RemoveContainer" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.180434 4979 scope.go:117] "RemoveContainer" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.210875 4979 scope.go:117] "RemoveContainer" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.211357 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb\": container with ID starting with 8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb not found: ID does not exist" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211406 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb"} err="failed to get container status \"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb\": rpc error: code = NotFound desc = could not find container \"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb\": container with ID starting with 8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211432 4979 scope.go:117] "RemoveContainer" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.211749 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da\": container with ID starting with 37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da not found: ID does not exist" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211779 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da"} err="failed to get container status 
\"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da\": rpc error: code = NotFound desc = could not find container \"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da\": container with ID starting with 37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211801 4979 scope.go:117] "RemoveContainer" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.211994 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872\": container with ID starting with c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872 not found: ID does not exist" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.212014 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872"} err="failed to get container status \"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872\": rpc error: code = NotFound desc = could not find container \"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872\": container with ID starting with c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872 not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.351641 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.358383 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:11:43 crc kubenswrapper[4979]: I0130 23:11:43.085208 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" path="/var/lib/kubelet/pods/712bebd9-29c5-4d26-b254-b7d1dfdb8292/volumes" Jan 30 23:11:43 crc kubenswrapper[4979]: I0130 23:11:43.087083 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" path="/var/lib/kubelet/pods/af646c27-e12e-47e1-b540-6f37012f4f48/volumes" Jan 30 23:12:00 crc kubenswrapper[4979]: I0130 23:12:00.462818 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.039718 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.040489 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.040564 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:12:02 crc 
kubenswrapper[4979]: I0130 23:12:02.041661 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.041801 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19" gracePeriod=600 Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.383071 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19" exitCode=0 Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.383163 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19"} Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.383206 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:12:03 crc kubenswrapper[4979]: I0130 23:12:03.393791 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"} Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227134 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227919 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227932 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227948 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227954 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227975 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-utilities" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227982 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-utilities" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227995 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="init" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228000 4979 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="init" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.228013 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-content" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228019 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-content" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228208 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228237 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228794 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.244237 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.327193 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.328225 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.331582 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.337636 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.356162 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.356487 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.457931 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.457989 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " 
pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.458047 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.458106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.458753 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.476491 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.550430 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.559674 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.559745 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.560603 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.578739 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.652498 4979 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.016480 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:12:08 crc kubenswrapper[4979]: W0130 23:12:08.024322 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90346f0c_7cc3_4f3c_a29f_9b7265eff703.slice/crio-3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221 WatchSource:0}: Error finding container 3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221: Status 404 returned error can't find the container with id 3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221 Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.134661 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:12:08 crc kubenswrapper[4979]: W0130 23:12:08.136554 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59dad3f6_f4ce_4ce7_8364_044694d448f1.slice/crio-8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39 WatchSource:0}: Error finding container 8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39: Status 404 returned error can't find the container with id 8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39 Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.434348 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerStarted","Data":"1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.434405 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerStarted","Data":"8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.437939 4979 generic.go:334] "Generic (PLEG): container finished" podID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerID="22f97911fc2dfbe2d7800553503f0c8338bac7f33443e8a617f5b406e5bdc412" exitCode=0 Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.437983 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ds8kf" event={"ID":"90346f0c-7cc3-4f3c-a29f-9b7265eff703","Type":"ContainerDied","Data":"22f97911fc2dfbe2d7800553503f0c8338bac7f33443e8a617f5b406e5bdc412"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.438009 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ds8kf" event={"ID":"90346f0c-7cc3-4f3c-a29f-9b7265eff703","Type":"ContainerStarted","Data":"3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.452361 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-dc54-account-create-update-qv7gj" podStartSLOduration=1.452334702 podStartE2EDuration="1.452334702s" podCreationTimestamp="2026-01-30 23:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:08.450624825 +0000 UTC m=+5524.411871858" 
watchObservedRunningTime="2026-01-30 23:12:08.452334702 +0000 UTC m=+5524.413581735" Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.447689 4979 generic.go:334] "Generic (PLEG): container finished" podID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerID="1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7" exitCode=0 Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.447851 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerDied","Data":"1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7"} Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.842281 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.904473 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.904608 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.905502 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90346f0c-7cc3-4f3c-a29f-9b7265eff703" (UID: "90346f0c-7cc3-4f3c-a29f-9b7265eff703"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.910824 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p" (OuterVolumeSpecName: "kube-api-access-8kw5p") pod "90346f0c-7cc3-4f3c-a29f-9b7265eff703" (UID: "90346f0c-7cc3-4f3c-a29f-9b7265eff703"). InnerVolumeSpecName "kube-api-access-8kw5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.007556 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.007589 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.462149 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.462097 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ds8kf" event={"ID":"90346f0c-7cc3-4f3c-a29f-9b7265eff703","Type":"ContainerDied","Data":"3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221"} Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.462239 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.821425 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.822128 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"59dad3f6-f4ce-4ce7-8364-044694d448f1\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.835645 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d" (OuterVolumeSpecName: "kube-api-access-hlq8d") pod "59dad3f6-f4ce-4ce7-8364-044694d448f1" (UID: "59dad3f6-f4ce-4ce7-8364-044694d448f1"). InnerVolumeSpecName "kube-api-access-hlq8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.924112 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"59dad3f6-f4ce-4ce7-8364-044694d448f1\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.924700 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.924913 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59dad3f6-f4ce-4ce7-8364-044694d448f1" (UID: "59dad3f6-f4ce-4ce7-8364-044694d448f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.026089 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.474171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerDied","Data":"8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39"} Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.474538 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39" Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.474225 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.586721 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:12:12 crc kubenswrapper[4979]: E0130 23:12:12.587087 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerName="mariadb-database-create" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587100 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerName="mariadb-database-create" Jan 30 23:12:12 crc kubenswrapper[4979]: E0130 23:12:12.587116 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerName="mariadb-account-create-update" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587122 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerName="mariadb-account-create-update" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587314 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerName="mariadb-database-create" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587339 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerName="mariadb-account-create-update" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587949 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.590674 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.590699 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92r4q" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658112 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.661105 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760308 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760385 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760461 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod 
\"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.765901 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.768411 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.779135 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.781737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.906316 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:13 crc kubenswrapper[4979]: I0130 23:12:13.440104 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:12:13 crc kubenswrapper[4979]: I0130 23:12:13.495277 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerStarted","Data":"ae8a116b6fb783c60e7fb64f62534ab45339b2fca0b155394852d5289ad5a6fa"} Jan 30 23:12:14 crc kubenswrapper[4979]: I0130 23:12:14.502838 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerStarted","Data":"6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7"} Jan 30 23:12:17 crc kubenswrapper[4979]: I0130 23:12:17.528272 4979 generic.go:334] "Generic (PLEG): container finished" podID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerID="6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7" exitCode=0 Jan 30 23:12:17 crc kubenswrapper[4979]: I0130 23:12:17.528347 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerDied","Data":"6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7"} Jan 30 23:12:18 crc kubenswrapper[4979]: I0130 23:12:18.993683 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080594 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080757 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080830 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080935 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.085782 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.086548 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2" (OuterVolumeSpecName: "kube-api-access-fm6d2") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "kube-api-access-fm6d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.102852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.128348 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data" (OuterVolumeSpecName: "config-data") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183473 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183511 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183544 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183557 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.562247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerDied","Data":"ae8a116b6fb783c60e7fb64f62534ab45339b2fca0b155394852d5289ad5a6fa"} Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.562359 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8a116b6fb783c60e7fb64f62534ab45339b2fca0b155394852d5289ad5a6fa" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.562293 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.940842 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:12:19 crc kubenswrapper[4979]: E0130 23:12:19.941642 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerName="glance-db-sync" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.941665 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerName="glance-db-sync" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.941884 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerName="glance-db-sync" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.942998 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.957015 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005960 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005987 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.006054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.006151 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.007955 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.023609 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.023631 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.023903 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92r4q" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.024643 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.045010 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108650 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108701 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108755 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108777 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108815 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: 
\"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108933 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108997 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.109054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.109091 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.110378 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.110424 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.113003 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.115934 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " 
pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.125793 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.127372 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.135937 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.147046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.147533 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210435 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210576 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210606 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210636 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210667 
4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210718 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210810 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210992 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211155 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211158 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211296 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211354 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211574 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.215486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.216251 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.216440 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.228698 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.228776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.275434 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.314209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.314482 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.314518 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.316386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.316870 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317326 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317878 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.318512 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.320185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.320305 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.320669 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.333805 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.338215 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.482083 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.801142 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.941415 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.288212 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:21 crc kubenswrapper[4979]: W0130 23:12:21.292559 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd22d21d5_bd0b_4ad6_bd03_d024ff808850.slice/crio-16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4 WatchSource:0}: Error finding container 16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4: Status 404 returned error can't find the container with id 16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4 Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.379528 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.602995 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerID="ce550ab1c6e408aea10d06173b7920d5c55fe0078943da671c3598da2665ca61" exitCode=0 Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.603144 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerDied","Data":"ce550ab1c6e408aea10d06173b7920d5c55fe0078943da671c3598da2665ca61"} Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.603229 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerStarted","Data":"e0a67ae163b7249d5be022ae19e45164ea2e40a675fe276f1793defd9586ab15"} Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.608642 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerStarted","Data":"ea71b32de48e4a49d45264b0aa2f381b0548149ec8a0db018450e3df0c20f8ae"} Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.609894 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerStarted","Data":"16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.618995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerStarted","Data":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.619727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerStarted","Data":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.620648 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerStarted","Data":"949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.620750 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623116 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerStarted","Data":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623140 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerStarted","Data":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623219 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" containerID="cri-o://81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" gracePeriod=30 Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623271 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" containerID="cri-o://6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" gracePeriod=30 Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.652966 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.652941171 podStartE2EDuration="2.652941171s" podCreationTimestamp="2026-01-30 23:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:22.646196619 +0000 UTC m=+5538.607443642" watchObservedRunningTime="2026-01-30 23:12:22.652941171 +0000 UTC m=+5538.614188204" Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.671206 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" podStartSLOduration=3.671190511 podStartE2EDuration="3.671190511s" podCreationTimestamp="2026-01-30 23:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:22.665673203 +0000 UTC m=+5538.626920236" watchObservedRunningTime="2026-01-30 23:12:22.671190511 +0000 UTC m=+5538.632437544" Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.699159 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.699137633 podStartE2EDuration="3.699137633s" podCreationTimestamp="2026-01-30 23:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:22.689672888 +0000 UTC m=+5538.650919941" watchObservedRunningTime="2026-01-30 23:12:22.699137633 +0000 UTC m=+5538.660384666" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.189134 4979 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281314 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281347 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281400 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281433 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281514 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281549 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.283465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs" (OuterVolumeSpecName: "logs") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.283920 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.288506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts" (OuterVolumeSpecName: "scripts") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.288549 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph" (OuterVolumeSpecName: "ceph") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.289463 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc" (OuterVolumeSpecName: "kube-api-access-zlbnc") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "kube-api-access-zlbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.328328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.342628 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data" (OuterVolumeSpecName: "config-data") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.382994 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383041 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383054 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383064 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383071 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383079 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383088 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.426338 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632269 4979 generic.go:334] "Generic (PLEG): container finished" podID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" exitCode=0 Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632306 4979 generic.go:334] "Generic (PLEG): container finished" podID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" exitCode=143 Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632334 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerDied","Data":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerDied","Data":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerDied","Data":"ea71b32de48e4a49d45264b0aa2f381b0548149ec8a0db018450e3df0c20f8ae"} Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632435 4979 scope.go:117] "RemoveContainer" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.703463 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.708616 4979 scope.go:117] "RemoveContainer" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.715683 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.773967 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.774666 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774689 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.774724 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774730 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774941 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774955 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.783226 4979 scope.go:117] "RemoveContainer" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.784924 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.789485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.795221 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.796395 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": container with ID starting with 6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370 not found: ID does not exist" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.796457 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} err="failed to get container status \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": rpc error: code = NotFound desc = could not find container \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": container with ID starting with 6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.796488 4979 scope.go:117] "RemoveContainer" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.796961 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": container with ID starting with 81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9 not found: ID does not exist" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.796994 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} err="failed to get container status \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": rpc error: code = NotFound desc = could not find container \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": container with ID starting with 81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797056 4979 scope.go:117] "RemoveContainer" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797389 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} err="failed to get container status \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": rpc error: code = NotFound desc = could not find container \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": container with ID starting with 6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797409 4979 
scope.go:117] "RemoveContainer" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797691 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} err="failed to get container status \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": rpc error: code = NotFound desc = could not find container \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": container with ID starting with 81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.918730 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.918785 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.918815 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919075 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919453 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919554 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc 
kubenswrapper[4979]: I0130 23:12:24.021693 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021751 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021833 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021862 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021901 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.022398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.023217 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.026616 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.028083 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.029776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.029857 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.040360 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.118322 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.641729 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" containerID="cri-o://4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" gracePeriod=30 Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.641792 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" containerID="cri-o://8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" gracePeriod=30 Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.767647 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:24 crc kubenswrapper[4979]: W0130 23:12:24.803096 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eceabd7_12d5_42b8_9add_f89801459249.slice/crio-2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a WatchSource:0}: Error finding container 2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a: Status 404 returned error can't find the container with id 2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.090699 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" path="/var/lib/kubelet/pods/b6723bda-cb31-4951-b243-9a358b8e65f0/volumes" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.313727 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.444916 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445423 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445491 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445534 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445888 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445959 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.449092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.449158 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs" (OuterVolumeSpecName: "logs") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.451006 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4" (OuterVolumeSpecName: "kube-api-access-tkqs4") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "kube-api-access-tkqs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.451632 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph" (OuterVolumeSpecName: "ceph") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.457612 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts" (OuterVolumeSpecName: "scripts") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.494235 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.520460 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data" (OuterVolumeSpecName: "config-data") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550422 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550467 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550485 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550528 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550542 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550553 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550565 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654399 4979 generic.go:334] "Generic (PLEG): container finished" podID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" exitCode=0 Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654442 4979 generic.go:334] "Generic (PLEG): container finished" podID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" exitCode=143 Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654472 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerDied","Data":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654672 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerDied","Data":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654687 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerDied","Data":"16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654741 4979 scope.go:117] "RemoveContainer" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.658620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerStarted","Data":"82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.658652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerStarted","Data":"2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.693385 4979 scope.go:117] "RemoveContainer" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.701480 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.731211 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.744968 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.745424 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745440 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.745461 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745467 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745626 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745643 4979 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.746965 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.752564 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.754053 4979 scope.go:117] "RemoveContainer" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.754281 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.757809 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": container with ID starting with 8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5 not found: ID does not exist" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.757865 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} err="failed to get container status \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": rpc error: code = NotFound desc = could not find container \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": container with ID starting with 8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.757891 4979 scope.go:117] "RemoveContainer" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.758522 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": container with ID starting with 4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555 not found: ID does not exist" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758541 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} err="failed to get container status \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": rpc error: code = NotFound desc = could not find container \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": container with ID starting with 4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758556 4979 scope.go:117] "RemoveContainer" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758858 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} 
err="failed to get container status \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": rpc error: code = NotFound desc = could not find container \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": container with ID starting with 8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758871 4979 scope.go:117] "RemoveContainer" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.759371 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} err="failed to get container status \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": rpc error: code = NotFound desc = could not find container \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": container with ID starting with 4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855729 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855872 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855901 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855994 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 
23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.856068 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957593 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957719 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957766 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957783 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957800 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957819 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.958788 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.958853 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.963216 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.963757 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.969128 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.969263 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.976379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.061741 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.580026 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.687471 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerStarted","Data":"028416f98f1589f42948c16d708a30eb83a748a9cb41317ffc40f3e850e93529"} Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.695264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerStarted","Data":"55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065"} Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.739227 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.739203444 podStartE2EDuration="3.739203444s" podCreationTimestamp="2026-01-30 23:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:26.730606443 +0000 UTC m=+5542.691853486" watchObservedRunningTime="2026-01-30 23:12:26.739203444 +0000 UTC m=+5542.700450477" Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.084951 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" path="/var/lib/kubelet/pods/d22d21d5-bd0b-4ad6-bd03-d024ff808850/volumes" Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.704437 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerStarted","Data":"4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232"} Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.704754 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerStarted","Data":"b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545"} Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.729436 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.729411552 podStartE2EDuration="2.729411552s" podCreationTimestamp="2026-01-30 23:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:27.722796145 +0000 UTC m=+5543.684043178" watchObservedRunningTime="2026-01-30 23:12:27.729411552 +0000 UTC m=+5543.690658585" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.277978 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.354631 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.354890 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" 
containerID="cri-o://1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4" gracePeriod=10 Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.386739 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.40:5353: connect: connection refused" Jan 30 23:12:30 crc kubenswrapper[4979]: E0130 23:12:30.511152 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e29f1a4_dca0_42b8_8ee9_e040433dad76.slice/crio-conmon-1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e29f1a4_dca0_42b8_8ee9_e040433dad76.slice/crio-1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.750729 4979 generic.go:334] "Generic (PLEG): container finished" podID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerID="1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4" exitCode=0 Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.751104 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerDied","Data":"1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4"} Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.863802 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.968718 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.968773 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.968826 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.969719 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.969815 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" 
(UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.975121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f" (OuterVolumeSpecName: "kube-api-access-8qx8f") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "kube-api-access-8qx8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.017796 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.027878 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.030385 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config" (OuterVolumeSpecName: "config") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.032528 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072197 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072427 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072499 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072557 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072612 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.762021 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerDied","Data":"4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3"} Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.762324 4979 scope.go:117] "RemoveContainer" containerID="1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.762102 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.789227 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.799428 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.799985 4979 scope.go:117] "RemoveContainer" containerID="70e7a3e289c9bede605a4d28f895b056899de6dff342f7658a2ae4deec0c89ae" Jan 30 23:12:33 crc kubenswrapper[4979]: I0130 23:12:33.081363 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" path="/var/lib/kubelet/pods/8e29f1a4-dca0-42b8-8ee9-e040433dad76/volumes" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.119225 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.119345 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.165325 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.167293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.789847 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.789926 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.064317 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.066297 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.100170 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.114015 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.743795 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.803010 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.803651 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.803705 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.850810 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:12:38 crc 
Jan 30 23:12:38 crc kubenswrapper[4979]: I0130 23:12:38.845385 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 23:12:38 crc kubenswrapper[4979]: I0130 23:12:38.845795 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 23:12:38 crc kubenswrapper[4979]: I0130 23:12:38.869486 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.818396 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6fj56"]
Jan 30 23:12:44 crc kubenswrapper[4979]: E0130 23:12:44.819787 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="init"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.819805 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="init"
Jan 30 23:12:44 crc kubenswrapper[4979]: E0130 23:12:44.819818 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.819824 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.821139 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.822098 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.841301 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fj56"]
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.922513 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"]
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.923870 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.930043 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.941798 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"]
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.948720 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.948797 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050078 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050463 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050561 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.051577 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.079807 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.142222 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.152219 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.152332 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.155199 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.174574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.306385 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.612942 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fj56"]
Jan 30 23:12:45 crc kubenswrapper[4979]: W0130 23:12:45.618561 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b37754_6d06_4d68_bf4b_34b553d5750e.slice/crio-c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043 WatchSource:0}: Error finding container c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043: Status 404 returned error can't find the container with id c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.791319 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"]
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.907141 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d73-account-create-update-kh7g2" event={"ID":"d7ef2a65-30bc-4af2-aa45-16b8b793359c","Type":"ContainerStarted","Data":"bf77eca2403d4cd7c920553e1d8e29a0e2f2640e4602e1da4935f3366a5776c9"}
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.910167 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerStarted","Data":"1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a"}
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.910228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerStarted","Data":"c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043"}
Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.933434 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-6fj56" podStartSLOduration=1.9333985550000001 podStartE2EDuration="1.933398555s" podCreationTimestamp="2026-01-30 23:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:45.924345812 +0000 UTC m=+5561.885592855" watchObservedRunningTime="2026-01-30 23:12:45.933398555 +0000 UTC m=+5561.894645588"
Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.920243 4979 generic.go:334] "Generic (PLEG): container finished" podID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerID="1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a" exitCode=0
Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.920329 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerDied","Data":"1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a"}
Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.922021 4979 generic.go:334] "Generic (PLEG): container finished" podID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerID="e78c967f90d787e6a500755dd51462d00698c1a63f9294556b2308f1758c7a1f" exitCode=0
Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.922146 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d73-account-create-update-kh7g2" event={"ID":"d7ef2a65-30bc-4af2-aa45-16b8b793359c","Type":"ContainerDied","Data":"e78c967f90d787e6a500755dd51462d00698c1a63f9294556b2308f1758c7a1f"}
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.361290 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.366891 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56"
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.426978 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"26b37754-6d06-4d68-bf4b-34b553d5750e\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") "
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.427150 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") "
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.427238 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") "
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.427302 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"26b37754-6d06-4d68-bf4b-34b553d5750e\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") "
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.428042 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7ef2a65-30bc-4af2-aa45-16b8b793359c" (UID: "d7ef2a65-30bc-4af2-aa45-16b8b793359c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.428302 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26b37754-6d06-4d68-bf4b-34b553d5750e" (UID: "26b37754-6d06-4d68-bf4b-34b553d5750e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.433968 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9" (OuterVolumeSpecName: "kube-api-access-l2qf9") pod "26b37754-6d06-4d68-bf4b-34b553d5750e" (UID: "26b37754-6d06-4d68-bf4b-34b553d5750e"). InnerVolumeSpecName "kube-api-access-l2qf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.441345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws" (OuterVolumeSpecName: "kube-api-access-fbwws") pod "d7ef2a65-30bc-4af2-aa45-16b8b793359c" (UID: "d7ef2a65-30bc-4af2-aa45-16b8b793359c"). InnerVolumeSpecName "kube-api-access-fbwws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529734 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") on node \"crc\" DevicePath \"\""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529768 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") on node \"crc\" DevicePath \"\""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529779 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529788 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.942335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d73-account-create-update-kh7g2" event={"ID":"d7ef2a65-30bc-4af2-aa45-16b8b793359c","Type":"ContainerDied","Data":"bf77eca2403d4cd7c920553e1d8e29a0e2f2640e4602e1da4935f3366a5776c9"}
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.942682 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf77eca2403d4cd7c920553e1d8e29a0e2f2640e4602e1da4935f3366a5776c9"
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.942710 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2"
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.944352 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerDied","Data":"c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043"}
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.944373 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043"
Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.944406 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56"
Need to start a new one" pod="openstack/placement-db-create-6fj56" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.248460 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:12:50 crc kubenswrapper[4979]: E0130 23:12:50.249145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerName="mariadb-database-create" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249159 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerName="mariadb-database-create" Jan 30 23:12:50 crc kubenswrapper[4979]: E0130 23:12:50.249184 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerName="mariadb-account-create-update" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249190 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerName="mariadb-account-create-update" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249359 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerName="mariadb-account-create-update" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249374 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerName="mariadb-database-create" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249906 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.253011 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.253366 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5dm54" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.253581 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.277460 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.288861 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.290267 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.301806 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371789 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371850 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371901 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371938 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371971 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372283 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372630 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372673 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372758 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475495 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475523 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475590 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475617 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475741 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475770 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477111 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477363 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477814 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477807 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.478200 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.483310 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.485185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc 
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.499628 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm"
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.500914 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96"
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.502640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96"
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.577749 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8cb96"
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.618382 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm"
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.905133 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8cb96"]
Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.997832 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerStarted","Data":"8f0344395cf6379abf8e1a3eeb1fc6ed566ba11f831b0b6dc1f945364e9dd8b9"}
Jan 30 23:12:51 crc kubenswrapper[4979]: I0130 23:12:51.212920 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"]
Jan 30 23:12:51 crc kubenswrapper[4979]: W0130 23:12:51.218869 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c13ddd7_ca9f_4446_a482_09cf5b71ced0.slice/crio-cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d WatchSource:0}: Error finding container cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d: Status 404 returned error can't find the container with id cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d
Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.009263 4979 generic.go:334] "Generic (PLEG): container finished" podID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" exitCode=0
Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.009594 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerDied","Data":"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e"}
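
The W-level manager.go:1169 entry above is cAdvisor racing container creation: it saw the new crio-... cgroup appear before the runtime had registered the container, so its lookup returned 404. It is benign when, as in the next entries, the same container ID (cd06f6a3...) shows up in a ContainerStarted event. Because almost everything else here is I-level, a quick severity filter helps real failures stand out; the prefix parsing assumes only the klog I/W/E/F format used in this file:

# severities.py -- surface W/E/F klog entries from a kubenswrapper log.
import re, sys

# klog prefix looks like "W0130 23:12:51.218869": severity letter + MMDD.
PREFIX = re.compile(r'\b([WEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)')

for line in open(sys.argv[1], encoding="utf-8", errors="replace"):
    if (m := PREFIX.search(line)):
        print(m.group(1), m.group(3), line.strip()[:120])
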
event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerStarted","Data":"cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d"} Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.012920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerStarted","Data":"a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520"} Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.075181 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8cb96" podStartSLOduration=2.075110212 podStartE2EDuration="2.075110212s" podCreationTimestamp="2026-01-30 23:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:52.068728401 +0000 UTC m=+5568.029975474" watchObservedRunningTime="2026-01-30 23:12:52.075110212 +0000 UTC m=+5568.036357275" Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.025818 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerID="a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520" exitCode=0 Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.025921 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerDied","Data":"a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520"} Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.030947 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerStarted","Data":"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"} Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.032225 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.100919 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" podStartSLOduration=3.100877086 podStartE2EDuration="3.100877086s" podCreationTimestamp="2026-01-30 23:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:53.08052055 +0000 UTC m=+5569.041767633" watchObservedRunningTime="2026-01-30 23:12:53.100877086 +0000 UTC m=+5569.062124179" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.345584 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458401 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458441 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458499 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs" (OuterVolumeSpecName: "logs") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458520 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458553 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.459083 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.463991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8" (OuterVolumeSpecName: "kube-api-access-tx4d8") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "kube-api-access-tx4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.464278 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts" (OuterVolumeSpecName: "scripts") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.485870 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.487133 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data" (OuterVolumeSpecName: "config-data") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560377 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560410 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560421 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560429 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.051909 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerDied","Data":"8f0344395cf6379abf8e1a3eeb1fc6ed566ba11f831b0b6dc1f945364e9dd8b9"} Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.051978 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0344395cf6379abf8e1a3eeb1fc6ed566ba11f831b0b6dc1f945364e9dd8b9" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.051935 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.531397 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8565876748-g76rq"] Jan 30 23:12:55 crc kubenswrapper[4979]: E0130 23:12:55.532687 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerName="placement-db-sync" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.532713 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerName="placement-db-sync" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.533113 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerName="placement-db-sync" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.535199 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.538394 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5dm54" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.541735 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.545220 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586261 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j4s\" (UniqueName: \"kubernetes.io/projected/019fe9ef-3972-45a8-82ec-8b566d9a1c58-kube-api-access-l7j4s\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586326 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-combined-ca-bundle\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586361 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019fe9ef-3972-45a8-82ec-8b566d9a1c58-logs\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-config-data\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.587069 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-scripts\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: 
I0130 23:12:55.617144 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8565876748-g76rq"] Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689020 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-scripts\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689176 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j4s\" (UniqueName: \"kubernetes.io/projected/019fe9ef-3972-45a8-82ec-8b566d9a1c58-kube-api-access-l7j4s\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689227 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-combined-ca-bundle\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689289 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019fe9ef-3972-45a8-82ec-8b566d9a1c58-logs\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-config-data\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.690090 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019fe9ef-3972-45a8-82ec-8b566d9a1c58-logs\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.693958 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-scripts\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.694416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-combined-ca-bundle\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.705301 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-config-data\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.718882 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j4s\" (UniqueName: \"kubernetes.io/projected/019fe9ef-3972-45a8-82ec-8b566d9a1c58-kube-api-access-l7j4s\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.867459 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:56 crc kubenswrapper[4979]: W0130 23:12:56.328263 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod019fe9ef_3972_45a8_82ec_8b566d9a1c58.slice/crio-2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983 WatchSource:0}: Error finding container 2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983: Status 404 returned error can't find the container with id 2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983 Jan 30 23:12:56 crc kubenswrapper[4979]: I0130 23:12:56.330841 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8565876748-g76rq"] Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082310 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8565876748-g76rq" event={"ID":"019fe9ef-3972-45a8-82ec-8b566d9a1c58","Type":"ContainerStarted","Data":"a78bb4f6e4d18bbe605a604da375b4ad1adbf8aec4a8179462386250a50af04b"} Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082788 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8565876748-g76rq" event={"ID":"019fe9ef-3972-45a8-82ec-8b566d9a1c58","Type":"ContainerStarted","Data":"4dee4b2c16d1f46ffe4dc9430bee4be1a0344706c6ab2fe5a75ac0dec5f19b76"} Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082805 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082815 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8565876748-g76rq" event={"ID":"019fe9ef-3972-45a8-82ec-8b566d9a1c58","Type":"ContainerStarted","Data":"2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983"} Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.094426 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8565876748-g76rq" podStartSLOduration=2.094411097 podStartE2EDuration="2.094411097s" podCreationTimestamp="2026-01-30 23:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:57.090286086 +0000 UTC m=+5573.051533119" watchObservedRunningTime="2026-01-30 23:12:57.094411097 +0000 UTC m=+5573.055658130" Jan 30 23:12:58 crc kubenswrapper[4979]: I0130 23:12:58.077423 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8565876748-g76rq" Jan 30 23:13:00 crc kubenswrapper[4979]: I0130 23:13:00.620859 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:13:00 crc kubenswrapper[4979]: I0130 23:13:00.698551 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:13:00 crc kubenswrapper[4979]: I0130 23:13:00.698813 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" containerID="cri-o://949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899" gracePeriod=10 Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.112459 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerID="949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899" exitCode=0 Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.113185 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerDied","Data":"949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899"} Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.113327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerDied","Data":"e0a67ae163b7249d5be022ae19e45164ea2e40a675fe276f1793defd9586ab15"} Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.113422 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a67ae163b7249d5be022ae19e45164ea2e40a675fe276f1793defd9586ab15" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.172772 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310177 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310291 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310357 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310986 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.311126 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.316696 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj" (OuterVolumeSpecName: "kube-api-access-bg5lj") pod 
"2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "kube-api-access-bg5lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.354870 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.357414 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.360746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.370676 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config" (OuterVolumeSpecName: "config") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413598 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413634 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413644 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413656 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413667 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:02 crc kubenswrapper[4979]: I0130 23:13:02.122265 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:13:02 crc kubenswrapper[4979]: I0130 23:13:02.177649 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:13:02 crc kubenswrapper[4979]: I0130 23:13:02.191315 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:13:03 crc kubenswrapper[4979]: I0130 23:13:03.083282 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" path="/var/lib/kubelet/pods/2ab8f580-9bea-44a9-a732-ef54cf9eef47/volumes" Jan 30 23:13:26 crc kubenswrapper[4979]: I0130 23:13:26.769881 4979 scope.go:117] "RemoveContainer" containerID="dad83fe6e0dd13f90e65510d87c2454c3b37aa1abc0bae6f460d76fcaed45b7c" Jan 30 23:13:26 crc kubenswrapper[4979]: I0130 23:13:26.834667 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8565876748-g76rq" Jan 30 23:13:26 crc kubenswrapper[4979]: I0130 23:13:26.834982 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8565876748-g76rq" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.892838 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:13:50 crc kubenswrapper[4979]: E0130 23:13:50.893566 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="init" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.893578 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="init" Jan 30 23:13:50 crc kubenswrapper[4979]: E0130 23:13:50.893603 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.893609 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.893743 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.894283 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.905164 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.949638 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.949688 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.993422 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.996655 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.004507 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.051999 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052127 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052939 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc 
kubenswrapper[4979]: I0130 23:13:51.075669 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.102984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.106789 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.110594 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.115285 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.153295 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.153437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.154201 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.179928 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.198857 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.199952 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.209338 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.216547 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.255913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.255981 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.256063 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.256131 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.317570 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.322608 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.323667 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.325846 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357777 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357828 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357875 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357925 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.358914 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.366760 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.373754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.421699 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.422186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod 
\"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.437733 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.466993 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.467109 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.544065 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.544790 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.546784 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.556801 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.564931 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.626252 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.626947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.628281 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.654736 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.666301 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.706311 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.729107 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.729233 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.830293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.830680 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.831476 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.848922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.924925 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.928910 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"]
Jan 30 23:13:51 crc kubenswrapper[4979]: W0130 23:13:51.951357 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09fb7fe9_97f7_4af9_897c_e4fb6f234c79.slice/crio-f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca WatchSource:0}: Error finding container f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca: Status 404 returned error can't find the container with id f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.054186 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"]
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.121102 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"]
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.185621 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"]
Jan 30 23:13:52 crc kubenswrapper[4979]: W0130 23:13:52.196289 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0010c53f_b0a4_44bd_9178_bbd2941973ff.slice/crio-5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642 WatchSource:0}: Error finding container 5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642: Status 404 returned error can't find the container with id 5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.199167 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"]
Jan 30 23:13:52 crc kubenswrapper[4979]: W0130 23:13:52.208372 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a160f3_ed61_481d_be84_cdc6c7b6097a.slice/crio-d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc WatchSource:0}: Error finding container d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc: Status 404 returned error can't find the container with id d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.615322 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerStarted","Data":"d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.615368 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerStarted","Data":"9e8472bdcdaadebac64e1b2f96d0af2b368fb0879ca9fc61ccf0b4fcdb472597"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.618235 4979 generic.go:334] "Generic (PLEG): container finished" podID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerID="27746524c4c68ca5b766ef144aa2b7cd8bd00780eefec84e45e51a6c155cf253" exitCode=0
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.618331 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nj2pr" event={"ID":"09fb7fe9-97f7-4af9-897c-e4fb6f234c79","Type":"ContainerDied","Data":"27746524c4c68ca5b766ef144aa2b7cd8bd00780eefec84e45e51a6c155cf253"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.618383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nj2pr" event={"ID":"09fb7fe9-97f7-4af9-897c-e4fb6f234c79","Type":"ContainerStarted","Data":"f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.624550 4979 generic.go:334] "Generic (PLEG): container finished" podID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerID="3d49f76579ebce159dde4f7f8e10b1d7dd782ed39ac26b0b2a652ca85113974a" exitCode=0
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.624744 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cxszb" event={"ID":"8f726869-e2f9-4a3b-b40a-236ad3a8566c","Type":"ContainerDied","Data":"3d49f76579ebce159dde4f7f8e10b1d7dd782ed39ac26b0b2a652ca85113974a"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.624784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cxszb" event={"ID":"8f726869-e2f9-4a3b-b40a-236ad3a8566c","Type":"ContainerStarted","Data":"ff223dea7923b35bbf27adc16d58465c7a8c2018c20ec345e5f1131825c02905"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.626425 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerStarted","Data":"dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.626479 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerStarted","Data":"5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.633591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerStarted","Data":"27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.633934 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerStarted","Data":"e7b7596bba991105ac1cdca95c1c01dcf5161ff0fa25b7bd1a44652e91891d68"}
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.634221 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-eff7-account-create-update-zbvkl" podStartSLOduration=1.6342102490000001 podStartE2EDuration="1.634210249s" podCreationTimestamp="2026-01-30 23:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:52.633261184 +0000 UTC m=+5628.594508227" watchObservedRunningTime="2026-01-30 23:13:52.634210249 +0000 UTC m=+5628.595457282"
Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.636211 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerStarted","Data":"95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd"}
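The "SyncLoop (PLEG): event for pod" and "Generic (PLEG): container finished" lines come from the Pod Lifecycle Event Generator: each event carries the pod UID, an event type (ContainerStarted/ContainerDied), and the container or sandbox ID as data, which is exactly the {"ID","Type","Data"} triple printed above. A toy Go model mirroring that shape, populated with values taken from the zbvkl events in the log; this is a simplified stand-in, not the kubelet's internal type.

```go
// Toy model of the PLEG events printed in the log; field names follow the
// log output, values are copied from the zbvkl entries above.
package main

import "fmt"

type PodLifecycleEventType string

const (
	ContainerStarted PodLifecycleEventType = "ContainerStarted"
	ContainerDied    PodLifecycleEventType = "ContainerDied"
)

type PodLifecycleEvent struct {
	ID   string                // pod UID
	Type PodLifecycleEventType // what happened
	Data string                // container (or sandbox) ID the event refers to
}

func main() {
	e := PodLifecycleEvent{
		ID:   "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd",
		Type: ContainerStarted,
		Data: "d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af",
	}
	fmt.Printf("event=%+v\n", e)
}
```

Each pod typically produces two ContainerStarted events in close succession: one for the workload container and one for the pod sandbox.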
event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerStarted","Data":"95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.636258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerStarted","Data":"d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.677487 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" podStartSLOduration=1.677466023 podStartE2EDuration="1.677466023s" podCreationTimestamp="2026-01-30 23:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:52.664853803 +0000 UTC m=+5628.626100846" watchObservedRunningTime="2026-01-30 23:13:52.677466023 +0000 UTC m=+5628.638713056" Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.699519 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" podStartSLOduration=1.699497315 podStartE2EDuration="1.699497315s" podCreationTimestamp="2026-01-30 23:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:52.690804392 +0000 UTC m=+5628.652051435" watchObservedRunningTime="2026-01-30 23:13:52.699497315 +0000 UTC m=+5628.660744348" Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.653729 4979 generic.go:334] "Generic (PLEG): container finished" podID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerID="27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.653788 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerDied","Data":"27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475"} Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.655970 4979 generic.go:334] "Generic (PLEG): container finished" podID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerID="95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.656010 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerDied","Data":"95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd"} Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.658509 4979 generic.go:334] "Generic (PLEG): container finished" podID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerID="d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.658569 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerDied","Data":"d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af"} Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.659905 4979 generic.go:334] "Generic (PLEG): container finished" podID="0010c53f-b0a4-44bd-9178-bbd2941973ff" 
containerID="dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.659952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerDied","Data":"dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.034624 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.077621 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"28312ce4-d376-4d84-9aea-175ee095e2ce\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.081094 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"28312ce4-d376-4d84-9aea-175ee095e2ce\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.082318 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28312ce4-d376-4d84-9aea-175ee095e2ce" (UID: "28312ce4-d376-4d84-9aea-175ee095e2ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.087069 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl" (OuterVolumeSpecName: "kube-api-access-djtjl") pod "28312ce4-d376-4d84-9aea-175ee095e2ce" (UID: "28312ce4-d376-4d84-9aea-175ee095e2ce"). InnerVolumeSpecName "kube-api-access-djtjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.133322 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.138864 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182259 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182372 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182441 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182465 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182902 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09fb7fe9-97f7-4af9-897c-e4fb6f234c79" (UID: "09fb7fe9-97f7-4af9-897c-e4fb6f234c79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182982 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f726869-e2f9-4a3b-b40a-236ad3a8566c" (UID: "8f726869-e2f9-4a3b-b40a-236ad3a8566c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185392 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2" (OuterVolumeSpecName: "kube-api-access-hhzv2") pod "8f726869-e2f9-4a3b-b40a-236ad3a8566c" (UID: "8f726869-e2f9-4a3b-b40a-236ad3a8566c"). InnerVolumeSpecName "kube-api-access-hhzv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.184798 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185525 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185537 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185553 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.187448 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88" (OuterVolumeSpecName: "kube-api-access-gfd88") pod "09fb7fe9-97f7-4af9-897c-e4fb6f234c79" (UID: "09fb7fe9-97f7-4af9-897c-e4fb6f234c79"). InnerVolumeSpecName "kube-api-access-gfd88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.295490 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.295541 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.673622 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cxszb" event={"ID":"8f726869-e2f9-4a3b-b40a-236ad3a8566c","Type":"ContainerDied","Data":"ff223dea7923b35bbf27adc16d58465c7a8c2018c20ec345e5f1131825c02905"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.673673 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff223dea7923b35bbf27adc16d58465c7a8c2018c20ec345e5f1131825c02905" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.673636 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.676828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerDied","Data":"e7b7596bba991105ac1cdca95c1c01dcf5161ff0fa25b7bd1a44652e91891d68"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.676871 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b7596bba991105ac1cdca95c1c01dcf5161ff0fa25b7bd1a44652e91891d68" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.676874 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.680181 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nj2pr" event={"ID":"09fb7fe9-97f7-4af9-897c-e4fb6f234c79","Type":"ContainerDied","Data":"f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.680637 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.680424 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.150641 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.166018 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.191986 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315740 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"20a160f3-ed61-481d-be84-cdc6c7b6097a\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315880 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315917 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315964 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"0010c53f-b0a4-44bd-9178-bbd2941973ff\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316001 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"20a160f3-ed61-481d-be84-cdc6c7b6097a\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316108 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"0010c53f-b0a4-44bd-9178-bbd2941973ff\" (UID: 
\"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316641 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0010c53f-b0a4-44bd-9178-bbd2941973ff" (UID: "0010c53f-b0a4-44bd-9178-bbd2941973ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316735 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" (UID: "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316931 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20a160f3-ed61-481d-be84-cdc6c7b6097a" (UID: "20a160f3-ed61-481d-be84-cdc6c7b6097a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.320654 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r" (OuterVolumeSpecName: "kube-api-access-b895r") pod "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" (UID: "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd"). InnerVolumeSpecName "kube-api-access-b895r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.320729 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt" (OuterVolumeSpecName: "kube-api-access-8xlrt") pod "20a160f3-ed61-481d-be84-cdc6c7b6097a" (UID: "20a160f3-ed61-481d-be84-cdc6c7b6097a"). InnerVolumeSpecName "kube-api-access-8xlrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.321246 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx" (OuterVolumeSpecName: "kube-api-access-fxlnx") pod "0010c53f-b0a4-44bd-9178-bbd2941973ff" (UID: "0010c53f-b0a4-44bd-9178-bbd2941973ff"). InnerVolumeSpecName "kube-api-access-fxlnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418142 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418428 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418509 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418587 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418692 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418789 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.695411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerDied","Data":"d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc"} Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.695459 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.695576 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.701434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerDied","Data":"9e8472bdcdaadebac64e1b2f96d0af2b368fb0879ca9fc61ccf0b4fcdb472597"} Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.701542 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8472bdcdaadebac64e1b2f96d0af2b368fb0879ca9fc61ccf0b4fcdb472597" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.701444 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.709825 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerDied","Data":"5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642"} Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.709898 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.709901 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.646897 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647270 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647287 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647298 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647307 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647319 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647326 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647342 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647351 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647380 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647387 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647403 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647410 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: 
I0130 23:13:56.647559 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647568 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647578 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647588 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647602 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647612 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.648455 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.655472 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5d7d2" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.655722 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.655916 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.668172 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742348 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742395 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742536 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742644 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
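Before admitting the new conductor-db-sync pod, the CPU and memory managers drop per-container resource accounting for pods that no longer exist; the E-level cpu_manager lines are routine cleanup, immediately confirmed by the paired "Deleted CPUSet assignment" lines. A toy model of that bookkeeping, mirroring the log's shape only; the kubelet's real state machinery is more involved.

```go
// Toy model of RemoveStaleState: drop assignments keyed by pod UID +
// container name once the owning pod is gone. UIDs/names from the log.
package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	assignments := map[key]string{
		{"8f726869-e2f9-4a3b-b40a-236ad3a8566c", "mariadb-database-create"}:       "cpuset 0-3", // value assumed
		{"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd", "mariadb-account-create-update"}: "cpuset 0-3", // value assumed
	}
	live := map[string]bool{} // none of the finished job pods remain
	for k := range assignments {
		if !live[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k) // safe during range in Go
		}
	}
}
```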
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844019 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844098 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844187 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844243 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.847703 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.847703 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.848226 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.883460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.969730 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.482669 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.731835 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerStarted","Data":"c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e"} Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.731897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerStarted","Data":"11dcf61649e55b780fcd94737fb264550a32846241b86a9cd7be47a7c418a6f6"} Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.761069 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" podStartSLOduration=1.760995624 podStartE2EDuration="1.760995624s" podCreationTimestamp="2026-01-30 23:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:57.760724267 +0000 UTC m=+5633.721971300" watchObservedRunningTime="2026-01-30 23:13:57.760995624 +0000 UTC m=+5633.722242697" Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.040228 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.040741 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.805245 4979 generic.go:334] "Generic (PLEG): container finished" podID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerID="c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e" exitCode=0 Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.805369 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerDied","Data":"c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e"} Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.220144 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.284540 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.284771 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.284972 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.285125 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.293662 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf" (OuterVolumeSpecName: "kube-api-access-5ntxf") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "kube-api-access-5ntxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.297160 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts" (OuterVolumeSpecName: "scripts") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.319443 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.330742 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data" (OuterVolumeSpecName: "config-data") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388822 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388900 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388921 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388938 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.829538 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerDied","Data":"11dcf61649e55b780fcd94737fb264550a32846241b86a9cd7be47a7c418a6f6"} Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.830091 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11dcf61649e55b780fcd94737fb264550a32846241b86a9cd7be47a7c418a6f6" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.829670 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.945488 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:14:04 crc kubenswrapper[4979]: E0130 23:14:04.946308 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerName="nova-cell0-conductor-db-sync" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.946344 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerName="nova-cell0-conductor-db-sync" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.947756 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerName="nova-cell0-conductor-db-sync" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.948790 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.951977 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.957954 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5d7d2" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.970083 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.107510 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.107769 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.107832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.209960 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.210216 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.210500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.223112 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.225281 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.228495 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.250413 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.276651 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5d7d2" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.284146 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.889163 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.850074 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerStarted","Data":"b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a"} Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.850492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerStarted","Data":"291809dc6734d5a9dd972c012cc5bf6b3603448d28e739ac608b8b509bef5d72"} Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.850889 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.877348 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.877328451 podStartE2EDuration="2.877328451s" podCreationTimestamp="2026-01-30 23:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:06.872471771 +0000 UTC m=+5642.833718814" watchObservedRunningTime="2026-01-30 23:14:06.877328451 +0000 UTC m=+5642.838575484" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.310101 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.752783 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.754785 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.756759 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.758159 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.776490 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814148 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814223 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814356 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915781 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915861 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915927 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.922720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.923000 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.923574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.926482 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.927548 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.929435 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.939108 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.947673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.965128 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.966631 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.968650 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.013223 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018198 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018229 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018287 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018345 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018384 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.060495 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.062090 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.064817 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.076591 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.090527 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124518 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124798 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124884 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125104 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125220 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.126572 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.136778 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.138176 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.143312 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.148871 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.159705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.166480 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.168894 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.174542 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.176082 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.179576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.179919 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.184582 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.190347 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227062 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227119 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227146 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227213 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227230 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227247 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227324 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227344 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227386 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227409 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.232384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.236356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.253607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329678 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"nova-api-0\" (UID: 
\"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329724 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329746 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329802 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329820 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329874 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329919 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329934 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331967 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331973 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331994 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.335343 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.336481 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.341363 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.343937 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.347780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.347977 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.362064 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.382002 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.561902 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.583667 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.621785 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:14:16 crc kubenswrapper[4979]: W0130 23:14:16.635819 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39641496_4ab5_48e9_98bf_5627a0a79411.slice/crio-d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8 WatchSource:0}: Error finding container d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8: Status 404 returned error can't find the container with id d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8 Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.818385 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.825916 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: W0130 23:14:16.826107 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f10fb19_9eb0_41eb_ba70_763c84417475.slice/crio-5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48 WatchSource:0}: Error finding container 5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48: Status 404 returned error can't find the container with id 5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48 Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.915354 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.917262 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.919547 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.920714 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.928240 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944773 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944988 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.971883 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.975572 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerStarted","Data":"f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126"} Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.975612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerStarted","Data":"d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8"} Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.977684 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerStarted","Data":"60509f738bbb30a36ecad997927d99d02ba1be3a0cb973cc7510d23115c3b2cc"} Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.978654 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerStarted","Data":"5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48"} Jan 30 23:14:16 crc kubenswrapper[4979]: W0130 23:14:16.980283 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc44f117_1f6a_4e61_8725_a4740971f42d.slice/crio-286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8 WatchSource:0}: Error finding container 286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8: Status 404 returned error can't find the container with id 286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8 Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.043650 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ggn6b" podStartSLOduration=2.043633455 podStartE2EDuration="2.043633455s" podCreationTimestamp="2026-01-30 23:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:17.000773022 +0000 UTC m=+5652.962020065" watchObservedRunningTime="2026-01-30 23:14:17.043633455 +0000 UTC m=+5653.004880488" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047759 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047863 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.048768 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.054913 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.054976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.066690 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.072249 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.121139 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.292619 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.830697 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.073168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerStarted","Data":"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.073527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerStarted","Data":"286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.089269 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerStarted","Data":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.089329 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerStarted","Data":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.095648 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.095633934 podStartE2EDuration="2.095633934s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.094639518 +0000 UTC m=+5654.055886551" watchObservedRunningTime="2026-01-30 23:14:18.095633934 +0000 UTC m=+5654.056880967" Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.104480 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerStarted","Data":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.104528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerStarted","Data":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.104539 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerStarted","Data":"b55158532a2f564ce450009ecb5b15953c06c8b9352e105f92880542b2da972c"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.107373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerStarted","Data":"83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.113899 4979 generic.go:334] "Generic (PLEG): container finished" podID="065e25fc-286f-4759-9430-a918818caeae" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" exitCode=0 Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.113967 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerDied","Data":"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.113993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerStarted","Data":"ebd3dade926a167983d467980b49120885f0e096bd8d71d96bc62f48fd9a4976"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.118221 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerStarted","Data":"289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.132583 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.132564317 podStartE2EDuration="3.132564317s" podCreationTimestamp="2026-01-30 23:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.12301428 +0000 UTC m=+5654.084261313" watchObservedRunningTime="2026-01-30 23:14:18.132564317 +0000 UTC m=+5654.093811340" Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.153841 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.153817669 podStartE2EDuration="2.153817669s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.145118905 +0000 UTC m=+5654.106365948" watchObservedRunningTime="2026-01-30 23:14:18.153817669 +0000 UTC m=+5654.115064702" Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.192209 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=3.19219023 podStartE2EDuration="3.19219023s" podCreationTimestamp="2026-01-30 23:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.181618117 +0000 UTC m=+5654.142865150" watchObservedRunningTime="2026-01-30 23:14:18.19219023 +0000 UTC m=+5654.153437263" Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.132216 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerStarted","Data":"658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf"} Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.137827 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerStarted","Data":"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75"} Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.139177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.174710 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" podStartSLOduration=3.174687721 podStartE2EDuration="3.174687721s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:19.163367457 +0000 UTC m=+5655.124614520" watchObservedRunningTime="2026-01-30 23:14:19.174687721 +0000 UTC m=+5655.135934754" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.156488 4979 generic.go:334] "Generic (PLEG): container finished" podID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerID="658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf" exitCode=0 Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.156563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerDied","Data":"658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf"} Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.178274 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" podStartSLOduration=5.178255699 podStartE2EDuration="5.178255699s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:19.196846557 +0000 UTC m=+5655.158093590" watchObservedRunningTime="2026-01-30 23:14:21.178255699 +0000 UTC m=+5657.139502732" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.341925 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.363122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.363185 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.382574 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.174498 4979 generic.go:334] "Generic (PLEG): container finished" podID="39641496-4ab5-48e9-98bf-5627a0a79411" containerID="f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126" exitCode=0 Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.174571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerDied","Data":"f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126"} Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.629839 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776616 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776744 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776780 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776828 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.791670 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d" (OuterVolumeSpecName: "kube-api-access-tzm7d") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "kube-api-access-tzm7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.791828 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts" (OuterVolumeSpecName: "scripts") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.814763 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.817937 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data" (OuterVolumeSpecName: "config-data") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878689 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878730 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878743 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878756 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: E0130 23:14:23.171455 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode541a45b_949e_42d3_bbbd_b7fcf76ae045.slice/crio-83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b\": RecentStats: unable to find data in memory cache]" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.189301 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerDied","Data":"83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b"} Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.189331 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.189354 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.622298 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.757223 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:14:23 crc kubenswrapper[4979]: E0130 23:14:23.758324 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" containerName="nova-manage" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758355 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" containerName="nova-manage" Jan 30 23:14:23 crc kubenswrapper[4979]: E0130 23:14:23.758378 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerName="nova-cell1-conductor-db-sync" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758385 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerName="nova-cell1-conductor-db-sync" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758555 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerName="nova-cell1-conductor-db-sync" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758572 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" containerName="nova-manage" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.759650 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.762238 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.768589 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799100 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799222 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799307 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.804829 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw" (OuterVolumeSpecName: "kube-api-access-gbdvw") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "kube-api-access-gbdvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.806010 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts" (OuterVolumeSpecName: "scripts") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.822876 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data" (OuterVolumeSpecName: "config-data") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.823213 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.901915 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.902367 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903087 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903289 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903312 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903324 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdvw\" (UniqueName: 
\"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903334 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.004844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.004920 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.004957 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.008611 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.011569 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.020668 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.076745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.243295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerDied","Data":"d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8"} Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.243347 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.243434 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.408682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.409279 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" containerID="cri-o://7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.409310 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" containerID="cri-o://efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.430391 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.430646 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" containerID="cri-o://0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.439451 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.439672 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" containerID="cri-o://1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.439763 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" containerID="cri-o://65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.570138 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.025914 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.039834 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141080 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141151 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141191 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141281 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141354 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141378 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141403 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141426 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141564 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs" (OuterVolumeSpecName: "logs") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141807 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.142276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs" (OuterVolumeSpecName: "logs") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.147247 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l" (OuterVolumeSpecName: "kube-api-access-m645l") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "kube-api-access-m645l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.148386 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds" (OuterVolumeSpecName: "kube-api-access-66wds") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "kube-api-access-66wds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.165887 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.166651 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data" (OuterVolumeSpecName: "config-data") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.169360 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data" (OuterVolumeSpecName: "config-data") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.171870 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243871 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243914 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243927 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243940 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243960 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243975 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243993 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.253202 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerStarted","Data":"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.253242 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerStarted","Data":"dbd9dee23baab194c4b7ba7a0c9558a9771dc7905ed62cf49005905c307d1f4a"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254663 4979 generic.go:334] "Generic (PLEG): container finished" podID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" exitCode=0 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254691 4979 generic.go:334] "Generic (PLEG): container finished" podID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" exitCode=143 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254718 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254748 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerDied","Data":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerDied","Data":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254798 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerDied","Data":"60509f738bbb30a36ecad997927d99d02ba1be3a0cb973cc7510d23115c3b2cc"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254814 4979 scope.go:117] "RemoveContainer" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263674 4979 generic.go:334] "Generic (PLEG): container finished" podID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" exitCode=0 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263816 4979 generic.go:334] "Generic (PLEG): container finished" podID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" exitCode=143 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263814 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerDied","Data":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263870 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerDied","Data":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263882 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerDied","Data":"b55158532a2f564ce450009ecb5b15953c06c8b9352e105f92880542b2da972c"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263793 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.284493 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.284477719 podStartE2EDuration="2.284477719s" podCreationTimestamp="2026-01-30 23:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:25.278323473 +0000 UTC m=+5661.239570526" watchObservedRunningTime="2026-01-30 23:14:25.284477719 +0000 UTC m=+5661.245724752" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.296145 4979 scope.go:117] "RemoveContainer" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.309848 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.323369 4979 scope.go:117] "RemoveContainer" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.324002 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": container with ID starting with 65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70 not found: ID does not exist" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324108 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} err="failed to get container status \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": rpc error: code = NotFound desc = could not find container \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": container with ID starting with 65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324149 4979 scope.go:117] "RemoveContainer" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.324519 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": container with ID starting with 1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8 not found: ID does not exist" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324555 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} err="failed to get container status \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": rpc error: code = NotFound desc = could not find container \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": container with ID starting with 1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324577 4979 scope.go:117] "RemoveContainer" 
containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324845 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} err="failed to get container status \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": rpc error: code = NotFound desc = could not find container \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": container with ID starting with 65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.326559 4979 scope.go:117] "RemoveContainer" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.328526 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} err="failed to get container status \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": rpc error: code = NotFound desc = could not find container \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": container with ID starting with 1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.328578 4979 scope.go:117] "RemoveContainer" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.333272 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.342772 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343175 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343189 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343205 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343211 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343227 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343234 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343270 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343276 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" Jan 30 23:14:25 crc 
kubenswrapper[4979]: I0130 23:14:25.343442 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343456 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343487 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343500 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.344437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.347728 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.354737 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.364685 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.372150 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.379168 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.380630 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.385524 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.401288 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.423898 4979 scope.go:117] "RemoveContainer" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446955 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446974 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.451805 4979 scope.go:117] "RemoveContainer" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.452191 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": container with ID starting with efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740 not found: ID does not exist" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.452242 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} err="failed to get container status \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": rpc error: code = NotFound desc = could not find container \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": container with ID starting with efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.452270 4979 scope.go:117] "RemoveContainer" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.454409 4979 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": container with ID starting with 7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129 not found: ID does not exist" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454436 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} err="failed to get container status \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": rpc error: code = NotFound desc = could not find container \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": container with ID starting with 7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454455 4979 scope.go:117] "RemoveContainer" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454651 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} err="failed to get container status \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": rpc error: code = NotFound desc = could not find container \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": container with ID starting with efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454673 4979 scope.go:117] "RemoveContainer" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454833 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} err="failed to get container status \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": rpc error: code = NotFound desc = could not find container \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": container with ID starting with 7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548362 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548417 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548461 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod 
\"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548512 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548536 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548557 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548600 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548625 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.549086 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.554144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.554866 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.565265 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.651136 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.653565 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.653950 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.666595 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.713330 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.724976 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.159083 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.249407 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:26 crc kubenswrapper[4979]: W0130 23:14:26.265172 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85c5102_a753_4ad3_9105_8d3071189381.slice/crio-6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e WatchSource:0}: Error finding container 6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e: Status 404 returned error can't find the container with id 6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.274499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerStarted","Data":"49b78e9a8a12dce88ac01cd85c6a5a960d97ca2bea386aff319c4f3edf124c27"} Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.277081 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.343102 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.356068 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.565059 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.645825 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.646206 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" containerID="cri-o://6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" gracePeriod=10 Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.847763 4979 scope.go:117] "RemoveContainer" containerID="b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.084273 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" path="/var/lib/kubelet/pods/8cdb73b7-0e45-491f-b17f-a867667c059f/volumes" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.085361 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" path="/var/lib/kubelet/pods/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8/volumes" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.123984 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285825 4979 generic.go:334] "Generic (PLEG): container finished" podID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" exitCode=0 Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285904 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerDied","Data":"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285936 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerDied","Data":"cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285954 4979 scope.go:117] "RemoveContainer" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285956 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287190 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287236 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287442 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287534 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.293274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerStarted","Data":"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.293314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerStarted","Data":"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.293326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerStarted","Data":"6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.303159 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7" (OuterVolumeSpecName: "kube-api-access-7dmf7") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "kube-api-access-7dmf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.303612 4979 scope.go:117] "RemoveContainer" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.305828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerStarted","Data":"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.305859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerStarted","Data":"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.317422 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.322223 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.322204045 podStartE2EDuration="2.322204045s" podCreationTimestamp="2026-01-30 23:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:27.3101325 +0000 UTC m=+5663.271379533" watchObservedRunningTime="2026-01-30 23:14:27.322204045 +0000 UTC m=+5663.283451088" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.327747 4979 scope.go:117] "RemoveContainer" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" Jan 30 23:14:27 crc kubenswrapper[4979]: E0130 23:14:27.330316 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed\": container with ID starting with 6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed not found: ID does not exist" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.330371 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"} err="failed to get container status \"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed\": rpc error: code = NotFound desc = could not find container \"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed\": container with ID 
starting with 6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed not found: ID does not exist" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.330400 4979 scope.go:117] "RemoveContainer" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" Jan 30 23:14:27 crc kubenswrapper[4979]: E0130 23:14:27.331087 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e\": container with ID starting with e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e not found: ID does not exist" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.331143 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e"} err="failed to get container status \"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e\": rpc error: code = NotFound desc = could not find container \"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e\": container with ID starting with e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e not found: ID does not exist" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.340676 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.340631461 podStartE2EDuration="2.340631461s" podCreationTimestamp="2026-01-30 23:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:27.329498991 +0000 UTC m=+5663.290746024" watchObservedRunningTime="2026-01-30 23:14:27.340631461 +0000 UTC m=+5663.301878494" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.343837 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.364694 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.370146 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config" (OuterVolumeSpecName: "config") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.390836 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). 
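[editor's note] The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" entries above are a benign race: the kubelet re-queries CRI-O for a container it has just deleted, and the runtime answers gRPC NotFound. A minimal Go sketch of the pattern (hypothetical helper, not kubelet source; the status/codes packages are the same ones the "rpc error" text in the log comes from):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats a NotFound RPC error as success: once a container
// has been removed, a follow-up status query that races with the delete is
// expected to fail exactly the way the log shows.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil // already gone; nothing left to clean up
	}
	return err
}

func main() {
	// Simulate the runtime's answer from the log:
	// "rpc error: code = NotFound desc = could not find container ..."
	err := status.Error(codes.NotFound, `could not find container "6d2efb06..."`)
	fmt.Println(ignoreNotFound(err)) // <nil>: safe to ignore during cleanup
}
```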
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393090 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393127 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393136 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393150 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393162 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.621653 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.633519 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.079467 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" path="/var/lib/kubelet/pods/3c13ddd7-ca9f-4446-a482-09cf5b71ced0/volumes" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.101608 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.145317 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.227516 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"bc44f117-1f6a-4e61-8725-a4740971f42d\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.227858 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"bc44f117-1f6a-4e61-8725-a4740971f42d\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.228053 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"bc44f117-1f6a-4e61-8725-a4740971f42d\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.233102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht" (OuterVolumeSpecName: "kube-api-access-crnht") pod "bc44f117-1f6a-4e61-8725-a4740971f42d" (UID: "bc44f117-1f6a-4e61-8725-a4740971f42d"). InnerVolumeSpecName "kube-api-access-crnht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.253308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data" (OuterVolumeSpecName: "config-data") pod "bc44f117-1f6a-4e61-8725-a4740971f42d" (UID: "bc44f117-1f6a-4e61-8725-a4740971f42d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.253796 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc44f117-1f6a-4e61-8725-a4740971f42d" (UID: "bc44f117-1f6a-4e61-8725-a4740971f42d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322399 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" exitCode=0 Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322445 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerDied","Data":"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e"} Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322469 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerDied","Data":"286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8"} Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322485 4979 scope.go:117] "RemoveContainer" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322590 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.331383 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.331622 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.331774 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.381440 4979 scope.go:117] "RemoveContainer" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.381989 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e\": container with ID starting with 0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e not found: ID does not exist" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.382021 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e"} err="failed to get container status \"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e\": rpc error: code = NotFound desc = could not find container \"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e\": container with ID starting with 0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e not found: ID does not exist" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.385721 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.397335 4979 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.404848 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.405328 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405344 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.405367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="init" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405376 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="init" Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.405393 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405401 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405646 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405658 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.406428 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.408525 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.414099 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.535171 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.535246 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.535267 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.604388 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.606064 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.608981 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.609163 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.618492 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.637149 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.637213 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.637235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.640540 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.642889 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.664186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.725713 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739368 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739453 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739596 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739985 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841250 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841313 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841358 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.846713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: 
\"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.848828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.849340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.859726 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.922703 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.218677 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:30 crc kubenswrapper[4979]: W0130 23:14:30.221765 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c9f1d3_a870_4b2f_bc60_e2a13d520664.slice/crio-24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555 WatchSource:0}: Error finding container 24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555: Status 404 returned error can't find the container with id 24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555 Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.339748 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerStarted","Data":"24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555"} Jan 30 23:14:30 crc kubenswrapper[4979]: W0130 23:14:30.368363 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c7f950_be1a_4557_8548_d41ac49e8010.slice/crio-66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51 WatchSource:0}: Error finding container 66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51: Status 404 returned error can't find the container with id 66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51 Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.368824 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.713883 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.714235 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.088407 4979 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" path="/var/lib/kubelet/pods/bc44f117-1f6a-4e61-8725-a4740971f42d/volumes" Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.351356 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerStarted","Data":"2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a"} Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.351418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerStarted","Data":"66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51"} Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.353697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerStarted","Data":"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74"} Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.380779 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jzkql" podStartSLOduration=2.380756984 podStartE2EDuration="2.380756984s" podCreationTimestamp="2026-01-30 23:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:31.375505503 +0000 UTC m=+5667.336752556" watchObservedRunningTime="2026-01-30 23:14:31.380756984 +0000 UTC m=+5667.342004027" Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.412648 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.412625811 podStartE2EDuration="2.412625811s" podCreationTimestamp="2026-01-30 23:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:31.404095841 +0000 UTC m=+5667.365342894" watchObservedRunningTime="2026-01-30 23:14:31.412625811 +0000 UTC m=+5667.373872854" Jan 30 23:14:32 crc kubenswrapper[4979]: I0130 23:14:32.039562 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:14:32 crc kubenswrapper[4979]: I0130 23:14:32.039903 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:14:34 crc kubenswrapper[4979]: I0130 23:14:34.726827 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.394712 4979 generic.go:334] "Generic (PLEG): container finished" podID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerID="2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a" exitCode=0 Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.394772 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" 
event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerDied","Data":"2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a"} Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.714501 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.714549 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.726326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.726367 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.788662 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881207 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.71:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881530 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.71:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881594 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.72:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881475 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.72:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911505 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911662 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911739 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " Jan 30 23:14:36 crc 
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") "
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.916299 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t" (OuterVolumeSpecName: "kube-api-access-lhw5t") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "kube-api-access-lhw5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.916529 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts" (OuterVolumeSpecName: "scripts") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.943279 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.957191 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data" (OuterVolumeSpecName: "config-data") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016151 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016181 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016191 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016200 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.411413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerDied","Data":"66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51"}
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.411452 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51"
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.411469 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql"
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.507195 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.507794 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" containerID="cri-o://c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.516170 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.516373 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" containerID="cri-o://49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.516506 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" containerID="cri-o://65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.530682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
containerName="nova-metadata-log" containerID="cri-o://15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" gracePeriod=30 Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.532251 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" containerID="cri-o://296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" gracePeriod=30 Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.420588 4979 generic.go:334] "Generic (PLEG): container finished" podID="e85c5102-a753-4ad3-9105-8d3071189381" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" exitCode=143 Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.420666 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerDied","Data":"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6"} Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.422672 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" exitCode=143 Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.422707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerDied","Data":"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77"} Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.296018 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.405986 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.406402 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.406427 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.406482 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.407086 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs" (OuterVolumeSpecName: "logs") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.425855 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc" (OuterVolumeSpecName: "kube-api-access-k5sdc") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "kube-api-access-k5sdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.430449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.435731 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data" (OuterVolumeSpecName: "config-data") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462367 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" exitCode=0 Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462420 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerDied","Data":"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"} Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerDied","Data":"49b78e9a8a12dce88ac01cd85c6a5a960d97ca2bea386aff319c4f3edf124c27"} Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462478 4979 scope.go:117] "RemoveContainer" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462615 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509116 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509164 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509175 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509184 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.573388 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.582525 4979 scope.go:117] "RemoveContainer" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.593634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603280 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.603777 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603798 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.603812 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerName="nova-manage" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603818 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerName="nova-manage" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.603840 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603847 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.604095 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerName="nova-manage" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.604133 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.604143 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" 
containerName="nova-metadata-metadata" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.606928 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.612586 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.613228 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.621806 4979 scope.go:117] "RemoveContainer" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.624821 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b\": container with ID starting with 296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b not found: ID does not exist" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.624855 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"} err="failed to get container status \"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b\": rpc error: code = NotFound desc = could not find container \"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b\": container with ID starting with 296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b not found: ID does not exist" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.624880 4979 scope.go:117] "RemoveContainer" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.625304 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77\": container with ID starting with 15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77 not found: ID does not exist" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.625333 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77"} err="failed to get container status \"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77\": rpc error: code = NotFound desc = could not find container \"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77\": container with ID starting with 15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77 not found: ID does not exist" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.716420 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.716741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.717006 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.717099 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.785236 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818660 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.819577 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.826524 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.826612 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.836293 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.919380 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.919517 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.919612 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.923511 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts" (OuterVolumeSpecName: "kube-api-access-6fzts") pod "41c9f1d3-a870-4b2f-bc60-e2a13d520664" (UID: "41c9f1d3-a870-4b2f-bc60-e2a13d520664"). InnerVolumeSpecName "kube-api-access-6fzts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.931060 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.953213 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c9f1d3-a870-4b2f-bc60-e2a13d520664" (UID: "41c9f1d3-a870-4b2f-bc60-e2a13d520664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.963617 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data" (OuterVolumeSpecName: "config-data") pod "41c9f1d3-a870-4b2f-bc60-e2a13d520664" (UID: "41c9f1d3-a870-4b2f-bc60-e2a13d520664"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.024741 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.025043 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.025057 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.301264 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.430760 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431095 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431164 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431200 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs" (OuterVolumeSpecName: "logs") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.435331 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x" (OuterVolumeSpecName: "kube-api-access-pt98x") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "kube-api-access-pt98x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.461233 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data" (OuterVolumeSpecName: "config-data") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.470501 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.482330 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483339 4979 generic.go:334] "Generic (PLEG): container finished" podID="e85c5102-a753-4ad3-9105-8d3071189381" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" exitCode=0 Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483432 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerDied","Data":"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483479 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerDied","Data":"6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483508 4979 scope.go:117] "RemoveContainer" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483654 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501208 4979 generic.go:334] "Generic (PLEG): container finished" podID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" exitCode=0 Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerDied","Data":"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501280 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerDied","Data":"24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501329 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.531010 4979 scope.go:117] "RemoveContainer" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533096 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533125 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533140 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533154 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.537151 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.555025 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.560380 4979 scope.go:117] "RemoveContainer" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.565994 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283\": container with ID starting with 65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283 not found: ID does not exist" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.566054 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283"} err="failed to get container status \"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283\": rpc error: code = NotFound desc = could not find container \"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283\": container with ID starting with 65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283 not found: ID does not exist" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.566078 4979 scope.go:117] "RemoveContainer" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.568712 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6\": container with ID starting with 49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6 not found: ID does not exist" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.568754 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6"} err="failed to get container status \"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6\": rpc error: code = NotFound desc = could not find container \"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6\": container with ID starting with 49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6 not found: ID does not exist" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.568779 4979 scope.go:117] "RemoveContainer" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.582182 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.592696 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.593134 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593151 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.593165 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593176 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.593192 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593199 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593376 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593394 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593404 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.594380 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.596082 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.621115 4979 scope.go:117] "RemoveContainer" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.621550 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74\": container with ID starting with c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74 not found: ID does not exist" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.621589 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74"} err="failed to get container status \"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74\": rpc error: code = NotFound desc = could not find container \"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74\": container with ID starting with c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74 not found: ID does not exist" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.623073 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.632672 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.640171 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.641826 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.643723 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.647742 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738705 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738827 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738874 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738936 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.739008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.739069 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.840853 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.840915 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.840984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841060 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841098 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841116 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841139 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841670 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.847825 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.847825 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.847918 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.849572 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.862713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.862883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.930391 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.961589 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.097429 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" path="/var/lib/kubelet/pods/41c9f1d3-a870-4b2f-bc60-e2a13d520664/volumes" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.098157 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85c5102-a753-4ad3-9105-8d3071189381" path="/var/lib/kubelet/pods/e85c5102-a753-4ad3-9105-8d3071189381/volumes" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.098701 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" path="/var/lib/kubelet/pods/ecb82300-1ad1-4a3e-aba6-3635e79512a7/volumes" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.441945 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:43 crc kubenswrapper[4979]: W0130 23:14:43.441974 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a89116a_a8b5_4bf6_8e13_ec81f6b7a8c6.slice/crio-fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229 WatchSource:0}: Error finding container fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229: Status 404 returned error can't find the container with id fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229 Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.623426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerStarted","Data":"fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.630477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerStarted","Data":"679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.630565 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerStarted","Data":"7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.630580 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerStarted","Data":"9cc4bcac87294ecc35a0ba1173d943f288303feac35baadc83a45c516f8e77dd"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.646173 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.641557 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerStarted","Data":"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.642340 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerStarted","Data":"97dcc53461a420d86c88ad2c9e5439b13ea4d32d8913b4f9be12a15f52d97f4b"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.645931 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerStarted","Data":"15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.645990 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerStarted","Data":"abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.665674 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.6656508089999997 podStartE2EDuration="3.665650809s" podCreationTimestamp="2026-01-30 23:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:43.658678921 +0000 UTC m=+5679.619925954" watchObservedRunningTime="2026-01-30 23:14:44.665650809 +0000 UTC m=+5680.626897842" Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.670433 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.670409346 podStartE2EDuration="2.670409346s" podCreationTimestamp="2026-01-30 23:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:44.658344713 +0000 UTC m=+5680.619591836" watchObservedRunningTime="2026-01-30 23:14:44.670409346 +0000 UTC m=+5680.631656379" Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.694284 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.694246927 podStartE2EDuration="2.694246927s" podCreationTimestamp="2026-01-30 23:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:44.690541358 +0000 UTC m=+5680.651788431" watchObservedRunningTime="2026-01-30 23:14:44.694246927 +0000 UTC m=+5680.655493960" Jan 30 23:14:46 crc kubenswrapper[4979]: I0130 23:14:46.932386 4979 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:46 crc kubenswrapper[4979]: I0130 23:14:46.932662 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:47 crc kubenswrapper[4979]: I0130 23:14:47.961690 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 23:14:51 crc kubenswrapper[4979]: I0130 23:14:51.933765 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:14:51 crc kubenswrapper[4979]: I0130 23:14:51.934345 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.932827 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.932890 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.962364 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.973246 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.986681 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 23:14:53 crc kubenswrapper[4979]: I0130 23:14:53.014265 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:53 crc kubenswrapper[4979]: I0130 23:14:53.762023 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 23:14:54 crc kubenswrapper[4979]: I0130 23:14:54.015190 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:54 crc kubenswrapper[4979]: I0130 23:14:54.015212 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.151975 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"] Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.153555 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.155299 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.155661 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.170677 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"] Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.214472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.214556 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.214617 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.316049 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.316166 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.316235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.317337 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod 
\"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.323046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.335134 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.474280 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: W0130 23:15:00.933564 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654a24ec_64c8_42fb_8ec0_f4be5297d71b.slice/crio-03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e WatchSource:0}: Error finding container 03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e: Status 404 returned error can't find the container with id 03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.940229 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"] Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.810987 4979 generic.go:334] "Generic (PLEG): container finished" podID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerID="eca7942dd84fb6210abbf472b1d3e584f769d90b602f1eb132be7480230768be" exitCode=0 Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.811111 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" event={"ID":"654a24ec-64c8-42fb-8ec0-f4be5297d71b","Type":"ContainerDied","Data":"eca7942dd84fb6210abbf472b1d3e584f769d90b602f1eb132be7480230768be"} Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.812209 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" event={"ID":"654a24ec-64c8-42fb-8ec0-f4be5297d71b","Type":"ContainerStarted","Data":"03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e"} Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.964743 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.965534 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047142 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047193 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047230 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047879 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047937 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" gracePeriod=600 Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.093471 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:15:02 crc kubenswrapper[4979]: E0130 23:15:02.181733 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.829367 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" exitCode=0 Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.829417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"} Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.829733 4979 scope.go:117] "RemoveContainer" containerID="94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.830614 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:02 crc kubenswrapper[4979]: E0130 23:15:02.830948 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.834897 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.939118 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.939895 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.940154 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.944160 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.209162 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.289864 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.290002 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.290195 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.290912 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume" (OuterVolumeSpecName: "config-volume") pod "654a24ec-64c8-42fb-8ec0-f4be5297d71b" (UID: "654a24ec-64c8-42fb-8ec0-f4be5297d71b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.297239 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "654a24ec-64c8-42fb-8ec0-f4be5297d71b" (UID: "654a24ec-64c8-42fb-8ec0-f4be5297d71b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.302737 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86" (OuterVolumeSpecName: "kube-api-access-ctz86") pod "654a24ec-64c8-42fb-8ec0-f4be5297d71b" (UID: "654a24ec-64c8-42fb-8ec0-f4be5297d71b"). InnerVolumeSpecName "kube-api-access-ctz86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.392595 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.392631 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.392643 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.840668 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.840732 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" event={"ID":"654a24ec-64c8-42fb-8ec0-f4be5297d71b","Type":"ContainerDied","Data":"03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e"} Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.841792 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.844668 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.848437 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.050122 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:04 crc kubenswrapper[4979]: E0130 23:15:04.050517 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerName="collect-profiles" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.050538 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerName="collect-profiles" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.050711 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerName="collect-profiles" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.051637 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.101777 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.209876 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210068 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210131 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.286506 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.305449 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312367 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312417 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312440 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.313323 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.313829 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.314346 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.315115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.332071 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.381086 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.858779 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.876344 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerStarted","Data":"915cecbe9c6901c7d7d431835fc9e7decfca7d60cbb868415d35c96f21bde0b8"} Jan 30 23:15:05 crc kubenswrapper[4979]: I0130 23:15:05.078824 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" path="/var/lib/kubelet/pods/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03/volumes" Jan 30 23:15:05 crc kubenswrapper[4979]: I0130 23:15:05.885793 4979 generic.go:334] "Generic (PLEG): container finished" podID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerID="768d5b79edb701e759cd2a0fc62def57f1157b7fce4a7f7fc9d3ff38886f5e98" exitCode=0 Jan 30 23:15:05 crc kubenswrapper[4979]: I0130 23:15:05.885849 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerDied","Data":"768d5b79edb701e759cd2a0fc62def57f1157b7fce4a7f7fc9d3ff38886f5e98"} Jan 30 23:15:06 crc kubenswrapper[4979]: I0130 23:15:06.895494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerStarted","Data":"3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2"} Jan 30 23:15:06 crc kubenswrapper[4979]: I0130 23:15:06.895725 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:06 crc kubenswrapper[4979]: I0130 23:15:06.912165 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" podStartSLOduration=2.912148917 podStartE2EDuration="2.912148917s" podCreationTimestamp="2026-01-30 23:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:06.908684582 +0000 UTC m=+5702.869931615" watchObservedRunningTime="2026-01-30 23:15:06.912148917 +0000 UTC m=+5702.873395950" Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.382592 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.450248 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.450494 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" containerID="cri-o://22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" gracePeriod=10 Jan 30 23:15:14 crc kubenswrapper[4979]: E0130 23:15:14.642487 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065e25fc_286f_4759_9430_a918818caeae.slice/crio-conmon-22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065e25fc_286f_4759_9430_a918818caeae.slice/crio-22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.954497 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000552 4979 generic.go:334] "Generic (PLEG): container finished" podID="065e25fc-286f-4759-9430-a918818caeae" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" exitCode=0 Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerDied","Data":"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75"} Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000611 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000633 4979 scope.go:117] "RemoveContainer" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000623 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerDied","Data":"ebd3dade926a167983d467980b49120885f0e096bd8d71d96bc62f48fd9a4976"} Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020811 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020833 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020928 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.033079 4979 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb" (OuterVolumeSpecName: "kube-api-access-skvhb") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "kube-api-access-skvhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.034904 4979 scope.go:117] "RemoveContainer" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.076703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.076765 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.078313 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:15 crc kubenswrapper[4979]: E0130 23:15:15.079003 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.085525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config" (OuterVolumeSpecName: "config") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.107709 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.119453 4979 scope.go:117] "RemoveContainer" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" Jan 30 23:15:15 crc kubenswrapper[4979]: E0130 23:15:15.122670 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75\": container with ID starting with 22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75 not found: ID does not exist" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.122711 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75"} err="failed to get container status \"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75\": rpc error: code = NotFound desc = could not find container \"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75\": container with ID starting with 22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75 not found: ID does not exist" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.122734 4979 scope.go:117] "RemoveContainer" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" Jan 30 23:15:15 crc kubenswrapper[4979]: E0130 23:15:15.122966 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701\": container with ID starting with fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701 not found: ID does not exist" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.122984 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701"} err="failed to get container status \"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701\": rpc error: code = NotFound desc = could not find container \"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701\": container with ID starting with fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701 not found: ID does not exist" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129886 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129925 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129938 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129950 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") on node \"crc\" 
DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129962 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.330479 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.339567 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.046652 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:15:17 crc kubenswrapper[4979]: E0130 23:15:17.047127 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="init" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.047144 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="init" Jan 30 23:15:17 crc kubenswrapper[4979]: E0130 23:15:17.047158 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.047165 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.047392 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.048254 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.107594 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065e25fc-286f-4759-9430-a918818caeae" path="/var/lib/kubelet/pods/065e25fc-286f-4759-9430-a918818caeae/volumes" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.108424 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.151280 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.152665 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.155651 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.165457 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.199867 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.200496 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.301981 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.302392 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.302415 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.302472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.303113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.321765 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"cinder-db-create-dfcbh\" (UID: 
\"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.404010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.404222 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.404779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.412461 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.420771 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.470329 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:18 crc kubenswrapper[4979]: I0130 23:15:17.861809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:15:18 crc kubenswrapper[4979]: W0130 23:15:17.861988 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b7f12a_3ae2_43d3_83d8_ea5ac1439aed.slice/crio-6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c WatchSource:0}: Error finding container 6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c: Status 404 returned error can't find the container with id 6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c Jan 30 23:15:18 crc kubenswrapper[4979]: I0130 23:15:18.029086 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dfcbh" event={"ID":"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed","Type":"ContainerStarted","Data":"6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c"} Jan 30 23:15:18 crc kubenswrapper[4979]: I0130 23:15:18.679178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:15:18 crc kubenswrapper[4979]: W0130 23:15:18.682284 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9737fb48_932e_4216_a323_0fa11a0a136d.slice/crio-00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a WatchSource:0}: Error finding container 00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a: Status 404 returned error can't find the container with id 00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.039555 4979 generic.go:334] "Generic (PLEG): container finished" podID="9737fb48-932e-4216-a323-0fa11a0a136d" containerID="d7d84d9b6f642570ec9f0833c3f37b449071bcc3ab74fb1efbfc67cb25be27a7" exitCode=0 Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.039697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7719-account-create-update-h5jpn" event={"ID":"9737fb48-932e-4216-a323-0fa11a0a136d","Type":"ContainerDied","Data":"d7d84d9b6f642570ec9f0833c3f37b449071bcc3ab74fb1efbfc67cb25be27a7"} Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.040053 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7719-account-create-update-h5jpn" event={"ID":"9737fb48-932e-4216-a323-0fa11a0a136d","Type":"ContainerStarted","Data":"00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a"} Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.041774 4979 generic.go:334] "Generic (PLEG): container finished" podID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerID="b2aed671841955c62444becfeabff7ccb5bcd0fdccfa5d1f4e24c893f848c58c" exitCode=0 Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.041821 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dfcbh" event={"ID":"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed","Type":"ContainerDied","Data":"b2aed671841955c62444becfeabff7ccb5bcd0fdccfa5d1f4e24c893f848c58c"} Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.483901 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.491227 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.558531 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.558603 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.559595 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" (UID: "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.569471 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4" (OuterVolumeSpecName: "kube-api-access-cqdw4") pod "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" (UID: "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed"). InnerVolumeSpecName "kube-api-access-cqdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"9737fb48-932e-4216-a323-0fa11a0a136d\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660491 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"9737fb48-932e-4216-a323-0fa11a0a136d\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660844 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660861 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.663556 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9737fb48-932e-4216-a323-0fa11a0a136d" (UID: "9737fb48-932e-4216-a323-0fa11a0a136d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.667254 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698" (OuterVolumeSpecName: "kube-api-access-wr698") pod "9737fb48-932e-4216-a323-0fa11a0a136d" (UID: "9737fb48-932e-4216-a323-0fa11a0a136d"). InnerVolumeSpecName "kube-api-access-wr698". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.762966 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.763001 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.060698 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.060683 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7719-account-create-update-h5jpn" event={"ID":"9737fb48-932e-4216-a323-0fa11a0a136d","Type":"ContainerDied","Data":"00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a"} Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.060835 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.061934 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dfcbh" event={"ID":"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed","Type":"ContainerDied","Data":"6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c"} Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.061971 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.061985 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.548879 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:15:22 crc kubenswrapper[4979]: E0130 23:15:22.549605 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" containerName="mariadb-account-create-update" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549617 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" containerName="mariadb-account-create-update" Jan 30 23:15:22 crc kubenswrapper[4979]: E0130 23:15:22.549637 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerName="mariadb-database-create" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549643 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerName="mariadb-database-create" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549850 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerName="mariadb-database-create" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549869 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" containerName="mariadb-account-create-update" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.550500 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559464 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559581 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559716 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.560309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4jjsh" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698724 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698885 
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.550500 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559464 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8rfx"]
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559581 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559716 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.560309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4jjsh"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698724 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698885 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698945 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800290 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800347 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800457 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800516 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800584 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.805637 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.811254 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.811696 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.812575 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.821341 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.882438 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:23 crc kubenswrapper[4979]: I0130 23:15:23.328692 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8rfx"]
Jan 30 23:15:24 crc kubenswrapper[4979]: I0130 23:15:24.094057 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerStarted","Data":"41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007"}
Jan 30 23:15:24 crc kubenswrapper[4979]: I0130 23:15:24.094473 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerStarted","Data":"b9541c5802fcf035cfc55841001b2271cfaa6f01741f050b6430e48441fab1f9"}
Jan 30 23:15:24 crc kubenswrapper[4979]: I0130 23:15:24.113143 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x8rfx" podStartSLOduration=2.113068847 podStartE2EDuration="2.113068847s" podCreationTimestamp="2026-01-30 23:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:24.112237564 +0000 UTC m=+5720.073484597" watchObservedRunningTime="2026-01-30 23:15:24.113068847 +0000 UTC m=+5720.074315880"
Jan 30 23:15:26 crc kubenswrapper[4979]: I0130 23:15:26.069935 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"
Jan 30 23:15:26 crc kubenswrapper[4979]: E0130 23:15:26.070996 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:15:26 crc kubenswrapper[4979]: I0130 23:15:26.976281 4979 scope.go:117] "RemoveContainer" containerID="06f1c39be4f79a10738471e24d46871dad22c8321fde40d1075b882f27317030"
Jan 30 23:15:27 crc kubenswrapper[4979]: I0130 23:15:27.124331 4979 generic.go:334] "Generic (PLEG): container finished" podID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerID="41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007" exitCode=0
Jan 30 23:15:27 crc kubenswrapper[4979]: I0130 23:15:27.124373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerDied","Data":"41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007"}
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.483671 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606538 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") "
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606604 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") "
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606681 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") "
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") "
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606734 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") "
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") "
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.607488 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.613412 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.613760 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf" (OuterVolumeSpecName: "kube-api-access-vkqqf") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "kube-api-access-vkqqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.626224 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts" (OuterVolumeSpecName: "scripts") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.635195 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.671132 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data" (OuterVolumeSpecName: "config-data") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708632 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708675 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708689 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708702 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708714 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708726 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") on node \"crc\" DevicePath \"\""
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.142170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerDied","Data":"b9541c5802fcf035cfc55841001b2271cfaa6f01741f050b6430e48441fab1f9"}
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.142203 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8rfx"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.142208 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9541c5802fcf035cfc55841001b2271cfaa6f01741f050b6430e48441fab1f9"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.498143 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689759d469-jqhxp"]
Jan 30 23:15:29 crc kubenswrapper[4979]: E0130 23:15:29.498928 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerName="cinder-db-sync"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.498943 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerName="cinder-db-sync"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.499194 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerName="cinder-db-sync"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.500265 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.521072 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689759d469-jqhxp"]
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652635 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-sb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-nb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652828 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-config\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-dns-svc\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652958 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4h67\" (UniqueName: \"kubernetes.io/projected/d2693393-b0b5-4009-9c45-80d154fa756c-kube-api-access-c4h67\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.743274 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.744714 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.748501 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.748918 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.751275 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4jjsh"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.752550 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.753947 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754053 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-dns-svc\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754099 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4h67\" (UniqueName: \"kubernetes.io/projected/d2693393-b0b5-4009-9c45-80d154fa756c-kube-api-access-c4h67\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754154 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-sb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-nb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754248 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-config\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755139 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-config\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755165 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-nb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755457 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-dns-svc\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755657 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-sb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.785976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4h67\" (UniqueName: \"kubernetes.io/projected/d2693393-b0b5-4009-9c45-80d154fa756c-kube-api-access-c4h67\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.824511 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856265 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856327 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856365 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856889 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856927 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959775 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959804 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.960252 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.960311 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.960372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959625 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.963475 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.964383 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.970449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.972107 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.989519 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.995144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:30 crc kubenswrapper[4979]: I0130 23:15:30.064867 4979 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:15:30 crc kubenswrapper[4979]: I0130 23:15:30.369178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689759d469-jqhxp"] Jan 30 23:15:30 crc kubenswrapper[4979]: I0130 23:15:30.524131 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:15:30 crc kubenswrapper[4979]: W0130 23:15:30.528167 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5c27922_b152_465f_b0fe_117e336c7ae0.slice/crio-5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1 WatchSource:0}: Error finding container 5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1: Status 404 returned error can't find the container with id 5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1 Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.182863 4979 generic.go:334] "Generic (PLEG): container finished" podID="d2693393-b0b5-4009-9c45-80d154fa756c" containerID="c074709a0faa2cb5220ed986496fe6c89e9146b920a08c2bb0db74a260346281" exitCode=0 Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.183263 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689759d469-jqhxp" event={"ID":"d2693393-b0b5-4009-9c45-80d154fa756c","Type":"ContainerDied","Data":"c074709a0faa2cb5220ed986496fe6c89e9146b920a08c2bb0db74a260346281"} Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.183299 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689759d469-jqhxp" event={"ID":"d2693393-b0b5-4009-9c45-80d154fa756c","Type":"ContainerStarted","Data":"9b7617c198745bcfa434cf6f8a128bc1ff779ad2c3e34b45f53e616e03af51f2"} Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.199715 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerStarted","Data":"436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec"} Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.199763 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerStarted","Data":"5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1"} Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.210612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689759d469-jqhxp" event={"ID":"d2693393-b0b5-4009-9c45-80d154fa756c","Type":"ContainerStarted","Data":"ca77b320765f5f31e48b17b88339066c574f39f9bd13db2b1a8524882f0f78e3"} Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.211593 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.212601 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerStarted","Data":"e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2"} Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.212801 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.234593 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689759d469-jqhxp" podStartSLOduration=3.234576662 
podStartE2EDuration="3.234576662s" podCreationTimestamp="2026-01-30 23:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:32.227996545 +0000 UTC m=+5728.189243578" watchObservedRunningTime="2026-01-30 23:15:32.234576662 +0000 UTC m=+5728.195823685" Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.826281 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.857556 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.857538203 podStartE2EDuration="10.857538203s" podCreationTimestamp="2026-01-30 23:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:32.260588687 +0000 UTC m=+5728.221835720" watchObservedRunningTime="2026-01-30 23:15:39.857538203 +0000 UTC m=+5735.818785236" Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.889826 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.890083 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" containerID="cri-o://3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2" gracePeriod=10 Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.070009 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:40 crc kubenswrapper[4979]: E0130 23:15:40.070421 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.294008 4979 generic.go:334] "Generic (PLEG): container finished" podID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerID="3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2" exitCode=0 Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.294461 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerDied","Data":"3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2"} Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.377603 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.471856 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472239 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472395 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472734 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.492433 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8" (OuterVolumeSpecName: "kube-api-access-vjpk8") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "kube-api-access-vjpk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.533112 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.542442 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.554582 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.558766 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config" (OuterVolumeSpecName: "config") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575601 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575650 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575662 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575673 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575685 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.319518 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerDied","Data":"915cecbe9c6901c7d7d431835fc9e7decfca7d60cbb868415d35c96f21bde0b8"} Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.319619 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.319803 4979 scope.go:117] "RemoveContainer" containerID="3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.351386 4979 scope.go:117] "RemoveContainer" containerID="768d5b79edb701e759cd2a0fc62def57f1157b7fce4a7f7fc9d3ff38886f5e98" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.369193 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.391695 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.680266 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.680499 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" containerID="cri-o://6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.690858 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.691124 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.703780 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.704060 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.711343 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.711579 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" containerID="cri-o://abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.711734 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" containerID="cri-o://15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.752538 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.752792 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" 
containerName="nova-metadata-log" containerID="cri-o://7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.752896 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" containerID="cri-o://679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456" gracePeriod=30 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.173439 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.329584 4979 generic.go:334] "Generic (PLEG): container finished" podID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerID="289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3" exitCode=0 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.329958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerDied","Data":"289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3"} Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.331585 4979 generic.go:334] "Generic (PLEG): container finished" podID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerID="abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4" exitCode=143 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.331617 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerDied","Data":"abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4"} Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.333764 4979 generic.go:334] "Generic (PLEG): container finished" podID="18e5930e-5323-4957-8495-8ccec47fcec1" containerID="7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617" exitCode=143 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.333782 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerDied","Data":"7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617"} Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.612062 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.720378 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"0f10fb19-9eb0-41eb-ba70-763c84417475\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.720484 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"0f10fb19-9eb0-41eb-ba70-763c84417475\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.720625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"0f10fb19-9eb0-41eb-ba70-763c84417475\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.745496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm" (OuterVolumeSpecName: "kube-api-access-jcjfm") pod "0f10fb19-9eb0-41eb-ba70-763c84417475" (UID: "0f10fb19-9eb0-41eb-ba70-763c84417475"). InnerVolumeSpecName "kube-api-access-jcjfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.749225 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f10fb19-9eb0-41eb-ba70-763c84417475" (UID: "0f10fb19-9eb0-41eb-ba70-763c84417475"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.749446 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data" (OuterVolumeSpecName: "config-data") pod "0f10fb19-9eb0-41eb-ba70-763c84417475" (UID: "0f10fb19-9eb0-41eb-ba70-763c84417475"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.823289 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.823760 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.823772 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.964454 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.967464 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.969362 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.969435 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.079396 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" path="/var/lib/kubelet/pods/62a96508-72cd-4ec2-979e-e32ed0ee4aa0/volumes" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.344627 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerDied","Data":"5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48"} Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.344673 4979 scope.go:117] "RemoveContainer" containerID="289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.344767 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.380210 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.395764 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.417101 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: E0130 23:15:43.417685 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.417751 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" Jan 30 23:15:43 crc kubenswrapper[4979]: E0130 23:15:43.417819 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.417882 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 23:15:43 crc kubenswrapper[4979]: E0130 23:15:43.417957 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="init" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.418014 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="init" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.418315 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.418396 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.419100 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.421954 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.434618 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.537593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6x5h\" (UniqueName: \"kubernetes.io/projected/f517549b-f450-42f3-9445-6b45713a7328-kube-api-access-m6x5h\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.537862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.537957 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.640673 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.641131 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.641261 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6x5h\" (UniqueName: \"kubernetes.io/projected/f517549b-f450-42f3-9445-6b45713a7328-kube-api-access-m6x5h\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.647264 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.651245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.668593 
4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6x5h\" (UniqueName: \"kubernetes.io/projected/f517549b-f450-42f3-9445-6b45713a7328-kube-api-access-m6x5h\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.738845 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.237434 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:44 crc kubenswrapper[4979]: W0130 23:15:44.246172 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf517549b_f450_42f3_9445_6b45713a7328.slice/crio-34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f WatchSource:0}: Error finding container 34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f: Status 404 returned error can't find the container with id 34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.355968 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f517549b-f450-42f3-9445-6b45713a7328","Type":"ContainerStarted","Data":"34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f"} Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.361179 4979 generic.go:334] "Generic (PLEG): container finished" podID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerID="b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a" exitCode=0 Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.361288 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerDied","Data":"b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a"} Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.375573 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.455074 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.455116 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.455215 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.461138 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj" (OuterVolumeSpecName: "kube-api-access-hvjxj") pod "bf4fb85a-b378-482c-92d5-34f7f4e99e23" (UID: "bf4fb85a-b378-482c-92d5-34f7f4e99e23"). InnerVolumeSpecName "kube-api-access-hvjxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.479461 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4fb85a-b378-482c-92d5-34f7f4e99e23" (UID: "bf4fb85a-b378-482c-92d5-34f7f4e99e23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.482122 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data" (OuterVolumeSpecName: "config-data") pod "bf4fb85a-b378-482c-92d5-34f7f4e99e23" (UID: "bf4fb85a-b378-482c-92d5-34f7f4e99e23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.557694 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.557743 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.557763 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.886709 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": read tcp 10.217.0.2:45808->10.217.1.75:8775: read: connection reset by peer" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.886786 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": read tcp 10.217.0.2:45804->10.217.1.75:8775: read: connection reset by peer" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.099433 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" path="/var/lib/kubelet/pods/0f10fb19-9eb0-41eb-ba70-763c84417475/volumes" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.294194 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": read tcp 10.217.0.2:42732->10.217.1.76:8774: read: connection reset by peer" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.294257 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": read tcp 10.217.0.2:42734->10.217.1.76:8774: read: connection reset by peer" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.374955 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.375349 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" containerID="cri-o://d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" gracePeriod=30 Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.390798 4979 generic.go:334] "Generic (PLEG): container finished" podID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerID="15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a" exitCode=0 Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.390883 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerDied","Data":"15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.398217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f517549b-f450-42f3-9445-6b45713a7328","Type":"ContainerStarted","Data":"a9bf50ac422fb82972a106094af6a40acd911fa9017c807b57bb5857511dc6b1"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423270 4979 generic.go:334] "Generic (PLEG): container finished" podID="18e5930e-5323-4957-8495-8ccec47fcec1" containerID="679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456" exitCode=0 Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerDied","Data":"679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423412 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerDied","Data":"9cc4bcac87294ecc35a0ba1173d943f288303feac35baadc83a45c516f8e77dd"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423423 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc4bcac87294ecc35a0ba1173d943f288303feac35baadc83a45c516f8e77dd" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.429422 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.42940412 podStartE2EDuration="2.42940412s" podCreationTimestamp="2026-01-30 23:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:45.424841457 +0000 UTC m=+5741.386088480" watchObservedRunningTime="2026-01-30 23:15:45.42940412 +0000 UTC m=+5741.390651153" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.441307 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerDied","Data":"291809dc6734d5a9dd972c012cc5bf6b3603448d28e739ac608b8b509bef5d72"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.441361 4979 scope.go:117] "RemoveContainer" containerID="b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.441544 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.443655 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4fb85a_b378_482c_92d5_34f7f4e99e23.slice/crio-291809dc6734d5a9dd972c012cc5bf6b3603448d28e739ac608b8b509bef5d72\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a89116a_a8b5_4bf6_8e13_ec81f6b7a8c6.slice/crio-15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4fb85a_b378_482c_92d5_34f7f4e99e23.slice\": RecentStats: unable to find data in memory cache]" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.465189 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.497095 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.510081 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528261 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.528616 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528646 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.528669 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528675 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.528695 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528702 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528870 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528893 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528908 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.529473 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.538485 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.541179 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.584854 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.584937 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.584978 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585015 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585352 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rvqh\" (UniqueName: \"kubernetes.io/projected/274c05f8-cb23-41d5-b911-5d13bac207a0-kube-api-access-7rvqh\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585407 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.586597 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs" (OuterVolumeSpecName: "logs") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.596801 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd" (OuterVolumeSpecName: "kube-api-access-gzlfd") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "kube-api-access-gzlfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.611256 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data" (OuterVolumeSpecName: "config-data") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.639700 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687238 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rvqh\" (UniqueName: \"kubernetes.io/projected/274c05f8-cb23-41d5-b911-5d13bac207a0-kube-api-access-7rvqh\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687434 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687448 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687458 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687466 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzlfd\" (UniqueName: 
\"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.691866 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.714696 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.719127 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rvqh\" (UniqueName: \"kubernetes.io/projected/274c05f8-cb23-41d5-b911-5d13bac207a0-kube-api-access-7rvqh\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.862657 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.929781 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.997813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.997919 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.997975 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.998049 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.998661 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs" (OuterVolumeSpecName: "logs") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.004520 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt" (OuterVolumeSpecName: "kube-api-access-hnbdt") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "kube-api-access-hnbdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.026122 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data" (OuterVolumeSpecName: "config-data") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.032527 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100163 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100223 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100240 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100252 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.370538 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.454286 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"274c05f8-cb23-41d5-b911-5d13bac207a0","Type":"ContainerStarted","Data":"fdf757bd46ff2870fd14951aace91217de546f0fe3d3fdb86963926f8a1ad039"} Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467007 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerDied","Data":"fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229"} Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467096 4979 scope.go:117] "RemoveContainer" containerID="15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467275 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467438 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.499906 4979 scope.go:117] "RemoveContainer" containerID="abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.546719 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.570101 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.590924 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: E0130 23:15:46.591334 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591355 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" Jan 30 23:15:46 crc kubenswrapper[4979]: E0130 23:15:46.591383 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591390 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591566 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591581 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.606125 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.611467 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.612677 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.625318 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.633606 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.649162 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.651083 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.654065 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.664201 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715473 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715512 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-config-data\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715548 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715569 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4fm\" (UniqueName: \"kubernetes.io/projected/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-kube-api-access-tb4fm\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1269d92-1612-453c-8e80-29981ced4aca-logs\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.716119 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-config-data\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.716171 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-logs\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.716239 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvjr\" (UniqueName: \"kubernetes.io/projected/a1269d92-1612-453c-8e80-29981ced4aca-kube-api-access-xhvjr\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-config-data\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818565 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4fm\" (UniqueName: \"kubernetes.io/projected/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-kube-api-access-tb4fm\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818622 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1269d92-1612-453c-8e80-29981ced4aca-logs\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818667 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-config-data\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818689 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-logs\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818716 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvjr\" (UniqueName: \"kubernetes.io/projected/a1269d92-1612-453c-8e80-29981ced4aca-kube-api-access-xhvjr\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.819253 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1269d92-1612-453c-8e80-29981ced4aca-logs\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.819387 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-logs\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.823228 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-config-data\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.823803 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-config-data\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.824642 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.835411 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4fm\" (UniqueName: \"kubernetes.io/projected/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-kube-api-access-tb4fm\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.840501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvjr\" (UniqueName: \"kubernetes.io/projected/a1269d92-1612-453c-8e80-29981ced4aca-kube-api-access-xhvjr\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.840982 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.933644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.006810 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.082940 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" path="/var/lib/kubelet/pods/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6/volumes" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.083698 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" path="/var/lib/kubelet/pods/18e5930e-5323-4957-8495-8ccec47fcec1/volumes" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.084348 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" path="/var/lib/kubelet/pods/bf4fb85a-b378-482c-92d5-34f7f4e99e23/volumes" Jan 30 23:15:47 crc kubenswrapper[4979]: W0130 23:15:47.424688 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1269d92_1612_453c_8e80_29981ced4aca.slice/crio-e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80 WatchSource:0}: Error finding container e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80: Status 404 returned error can't find the container with id e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80 Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.427156 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.483228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1269d92-1612-453c-8e80-29981ced4aca","Type":"ContainerStarted","Data":"e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80"} Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.489434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"274c05f8-cb23-41d5-b911-5d13bac207a0","Type":"ContainerStarted","Data":"8c131e53c89ddc510dcc01d0c60f598338f0f65e94b54eed23677c89ebaca21b"} Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.489532 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.509054 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.512792 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.512771776 podStartE2EDuration="2.512771776s" podCreationTimestamp="2026-01-30 23:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:47.505270613 +0000 UTC m=+5743.466517646" watchObservedRunningTime="2026-01-30 23:15:47.512771776 +0000 UTC m=+5743.474018819" Jan 30 23:15:47 crc kubenswrapper[4979]: W0130 23:15:47.527832 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce01f4b_19ef_4c0b_ab4c_f76e96297fde.slice/crio-b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95 WatchSource:0}: Error finding container b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95: Status 404 returned error can't find the container with id b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95 Jan 30 23:15:47 crc 
kubenswrapper[4979]: E0130 23:15:47.967054 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:47 crc kubenswrapper[4979]: E0130 23:15:47.972814 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:47 crc kubenswrapper[4979]: E0130 23:15:47.977284 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:47 crc kubenswrapper[4979]: E0130 23:15:47.977366 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.344660 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.451136 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"4b128be7-1d02-4fdc-aa5d-356001e694ce\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.451290 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"4b128be7-1d02-4fdc-aa5d-356001e694ce\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.451414 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"4b128be7-1d02-4fdc-aa5d-356001e694ce\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.457724 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68" (OuterVolumeSpecName: "kube-api-access-qld68") pod "4b128be7-1d02-4fdc-aa5d-356001e694ce" (UID: "4b128be7-1d02-4fdc-aa5d-356001e694ce"). InnerVolumeSpecName "kube-api-access-qld68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.477861 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b128be7-1d02-4fdc-aa5d-356001e694ce" (UID: "4b128be7-1d02-4fdc-aa5d-356001e694ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.485965 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data" (OuterVolumeSpecName: "config-data") pod "4b128be7-1d02-4fdc-aa5d-356001e694ce" (UID: "4b128be7-1d02-4fdc-aa5d-356001e694ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.506120 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde","Type":"ContainerStarted","Data":"8698d5566fb9becbea0cb5eb481ef65f2ac7dbbe013ea0c1399eb23ab4418ddf"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.506171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde","Type":"ContainerStarted","Data":"cbbc55d21c6c217dc75f5a688ab29119f5135fbb2c72463dfa4d37ebd13896bc"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.506186 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde","Type":"ContainerStarted","Data":"b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510497 4979 generic.go:334] "Generic (PLEG): container finished" podID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" exitCode=0 Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510558 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerDied","Data":"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510863 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerDied","Data":"dbd9dee23baab194c4b7ba7a0c9558a9771dc7905ed62cf49005905c307d1f4a"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510915 4979 scope.go:117] "RemoveContainer" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.514406 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1269d92-1612-453c-8e80-29981ced4aca","Type":"ContainerStarted","Data":"561e3f9f1d7870b9e791b62cb8972174bf7978d5d5f354a599b2e8b6ad4aee7d"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.514434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1269d92-1612-453c-8e80-29981ced4aca","Type":"ContainerStarted","Data":"2d9b5135bee00c293bcad10b1cf9ecdabb42cdc93b5a33efade4ba2531397afb"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.536854 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.536826257 podStartE2EDuration="2.536826257s" podCreationTimestamp="2026-01-30 23:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:48.524155193 +0000 UTC m=+5744.485402226" watchObservedRunningTime="2026-01-30 23:15:48.536826257 +0000 UTC m=+5744.498073290" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.554095 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.554130 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.554141 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.567970 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.56794478 podStartE2EDuration="2.56794478s" podCreationTimestamp="2026-01-30 23:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:48.546936581 +0000 UTC m=+5744.508183624" watchObservedRunningTime="2026-01-30 23:15:48.56794478 +0000 UTC m=+5744.529191813" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.584289 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 
crc kubenswrapper[4979]: I0130 23:15:48.595597 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.602380 4979 scope.go:117] "RemoveContainer" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" Jan 30 23:15:48 crc kubenswrapper[4979]: E0130 23:15:48.602955 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d\": container with ID starting with d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d not found: ID does not exist" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.602989 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d"} err="failed to get container status \"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d\": rpc error: code = NotFound desc = could not find container \"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d\": container with ID starting with d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d not found: ID does not exist" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.606888 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 crc kubenswrapper[4979]: E0130 23:15:48.607463 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.607485 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.607665 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.608318 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.615053 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.656682 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.656917 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.656970 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdk9d\" (UniqueName: \"kubernetes.io/projected/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-kube-api-access-bdk9d\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.670895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.739438 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.759498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.759936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.759999 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdk9d\" (UniqueName: \"kubernetes.io/projected/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-kube-api-access-bdk9d\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.763156 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.769351 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" 
(UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.782067 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdk9d\" (UniqueName: \"kubernetes.io/projected/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-kube-api-access-bdk9d\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.004113 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.081056 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" path="/var/lib/kubelet/pods/4b128be7-1d02-4fdc-aa5d-356001e694ce/volumes" Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.459958 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:49 crc kubenswrapper[4979]: W0130 23:15:49.465444 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab6e2f8_0934_41a0_b35e_0c6e0b5dacd1.slice/crio-450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d WatchSource:0}: Error finding container 450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d: Status 404 returned error can't find the container with id 450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.533379 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1","Type":"ContainerStarted","Data":"450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d"} Jan 30 23:15:50 crc kubenswrapper[4979]: I0130 23:15:50.548952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1","Type":"ContainerStarted","Data":"6424cf9673576dec58b8e9cb3967c89696fd289997cfe1a9b161158c866792fb"} Jan 30 23:15:50 crc kubenswrapper[4979]: I0130 23:15:50.549474 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:50 crc kubenswrapper[4979]: I0130 23:15:50.590365 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.590332344 podStartE2EDuration="2.590332344s" podCreationTimestamp="2026-01-30 23:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:50.580334584 +0000 UTC m=+5746.541581617" watchObservedRunningTime="2026-01-30 23:15:50.590332344 +0000 UTC m=+5746.551579417" Jan 30 23:15:51 crc kubenswrapper[4979]: I0130 23:15:51.934828 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:15:51 crc kubenswrapper[4979]: I0130 23:15:51.936947 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.070611 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:52 crc kubenswrapper[4979]: E0130 23:15:52.070913 4979 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.109905 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.234449 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"f0607a76-8412-4547-945c-f5672e9516f8\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.234542 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"f0607a76-8412-4547-945c-f5672e9516f8\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.234664 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"f0607a76-8412-4547-945c-f5672e9516f8\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.243998 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4" (OuterVolumeSpecName: "kube-api-access-bm2d4") pod "f0607a76-8412-4547-945c-f5672e9516f8" (UID: "f0607a76-8412-4547-945c-f5672e9516f8"). InnerVolumeSpecName "kube-api-access-bm2d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.278278 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0607a76-8412-4547-945c-f5672e9516f8" (UID: "f0607a76-8412-4547-945c-f5672e9516f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.290509 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data" (OuterVolumeSpecName: "config-data") pod "f0607a76-8412-4547-945c-f5672e9516f8" (UID: "f0607a76-8412-4547-945c-f5672e9516f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.336528 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.336568 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.336580 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.575883 4979 generic.go:334] "Generic (PLEG): container finished" podID="f0607a76-8412-4547-945c-f5672e9516f8" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" exitCode=0 Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.575970 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerDied","Data":"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b"} Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.576044 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerDied","Data":"97dcc53461a420d86c88ad2c9e5439b13ea4d32d8913b4f9be12a15f52d97f4b"} Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.576067 4979 scope.go:117] "RemoveContainer" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.576058 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.634812 4979 scope.go:117] "RemoveContainer" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" Jan 30 23:15:52 crc kubenswrapper[4979]: E0130 23:15:52.636180 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b\": container with ID starting with 6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b not found: ID does not exist" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.636242 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b"} err="failed to get container status \"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b\": rpc error: code = NotFound desc = could not find container \"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b\": container with ID starting with 6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b not found: ID does not exist" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.639791 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.659602 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.682790 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: E0130 23:15:52.684408 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.684443 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.685241 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.693423 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.711395 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.719598 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.745061 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.745117 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-config-data\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.745159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74rx\" (UniqueName: \"kubernetes.io/projected/b6d75777-1cab-4bbc-ab03-361b03c488f4-kube-api-access-c74rx\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.846322 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.846380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-config-data\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.846422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c74rx\" (UniqueName: \"kubernetes.io/projected/b6d75777-1cab-4bbc-ab03-361b03c488f4-kube-api-access-c74rx\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.852044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.853986 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-config-data\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.868466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c74rx\" (UniqueName: 
\"kubernetes.io/projected/b6d75777-1cab-4bbc-ab03-361b03c488f4-kube-api-access-c74rx\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.032914 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.080004 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0607a76-8412-4547-945c-f5672e9516f8" path="/var/lib/kubelet/pods/f0607a76-8412-4547-945c-f5672e9516f8/volumes" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.531095 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:53 crc kubenswrapper[4979]: W0130 23:15:53.535688 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d75777_1cab_4bbc_ab03_361b03c488f4.slice/crio-438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c WatchSource:0}: Error finding container 438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c: Status 404 returned error can't find the container with id 438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.589069 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6d75777-1cab-4bbc-ab03-361b03c488f4","Type":"ContainerStarted","Data":"438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c"} Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.739828 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.752318 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.049344 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.598947 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6d75777-1cab-4bbc-ab03-361b03c488f4","Type":"ContainerStarted","Data":"928f87c3d45ff898d1d8dde5da3c9babebfd3c579208f6746eb5c698d0537949"} Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.615453 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.624227 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.624204669 podStartE2EDuration="2.624204669s" podCreationTimestamp="2026-01-30 23:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:54.622819162 +0000 UTC m=+5750.584066195" watchObservedRunningTime="2026-01-30 23:15:54.624204669 +0000 UTC m=+5750.585451702" Jan 30 23:15:55 crc kubenswrapper[4979]: I0130 23:15:55.898522 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:56 crc kubenswrapper[4979]: I0130 23:15:56.936626 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:15:56 crc 
kubenswrapper[4979]: I0130 23:15:56.936922 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:15:57 crc kubenswrapper[4979]: I0130 23:15:57.008662 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:15:57 crc kubenswrapper[4979]: I0130 23:15:57.008709 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.025656 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1269d92-1612-453c-8e80-29981ced4aca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.025979 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1269d92-1612-453c-8e80-29981ced4aca" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.033479 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.109372 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ce01f4b-19ef-4c0b-ab4c-f76e96297fde" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.110330 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ce01f4b-19ef-4c0b-ab4c-f76e96297fde" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.060644 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.062761 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.065572 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.097072 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.181946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182002 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182080 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182113 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182132 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283877 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283930 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283958 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.284019 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.284053 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.284657 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.292489 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.293211 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.303783 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.304652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.345922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 
crc kubenswrapper[4979]: I0130 23:15:59.397834 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.880975 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.140479 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.141589 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" containerID="cri-o://436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec" gracePeriod=30 Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.141760 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" containerID="cri-o://e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2" gracePeriod=30 Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.657148 4979 generic.go:334] "Generic (PLEG): container finished" podID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerID="436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec" exitCode=143 Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.657236 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerDied","Data":"436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec"} Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.659872 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerStarted","Data":"f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33"} Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.659908 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerStarted","Data":"4426c069aa3a6213267e35e8b2382f441791df8002dd43fd176594900d983cdf"} Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.918797 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.920282 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.922127 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.939272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021126 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021456 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021489 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021514 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021534 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-run\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: 
I0130 23:16:01.021580 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvq7s\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-kube-api-access-wvq7s\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021607 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021633 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021652 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021671 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021704 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021724 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123453 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123500 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123523 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123538 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123557 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123585 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123605 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-run\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123635 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123650 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvq7s\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-kube-api-access-wvq7s\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123681 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123711 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123741 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123759 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123823 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123949 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.124879 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.124987 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-run\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125201 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125209 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125242 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125238 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125305 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125434 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.129817 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.129920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.129953 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.135578 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc 
kubenswrapper[4979]: I0130 23:16:01.145622 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.146082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvq7s\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-kube-api-access-wvq7s\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.298952 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.468228 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.491763 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.491890 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.495814 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535243 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535280 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535318 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535339 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7h6w\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-kube-api-access-m7h6w\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535360 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535423 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535446 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-ceph\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535469 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-dev\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535528 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535567 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535598 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535619 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-lib-modules\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535633 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-run\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-scripts\") pod 
\"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535719 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-sys\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641184 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-scripts\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-sys\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641337 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-sys\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641408 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641522 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7h6w\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-kube-api-access-m7h6w\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641543 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641719 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641909 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-ceph\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642352 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-dev\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642444 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-lib-modules\") pod 
\"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642445 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-dev\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642459 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-run\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642491 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643272 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-lib-modules\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-run\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643355 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643353 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.646183 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-scripts\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.646413 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.646528 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data-custom\") pod \"cinder-backup-0\" (UID: 
\"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.647662 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-ceph\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.653084 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.657766 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7h6w\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-kube-api-access-m7h6w\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.679422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerStarted","Data":"11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce"} Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.706897 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.706874064 podStartE2EDuration="2.706874064s" podCreationTimestamp="2026-01-30 23:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:16:01.699893925 +0000 UTC m=+5757.661140958" watchObservedRunningTime="2026-01-30 23:16:01.706874064 +0000 UTC m=+5757.668121097" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.831524 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.886725 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.889153 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:16:02 crc kubenswrapper[4979]: I0130 23:16:02.418773 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 23:16:02 crc kubenswrapper[4979]: W0130 23:16:02.421516 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e02f71_2ffc_45bb_9344_28ff1640cffd.slice/crio-a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08 WatchSource:0}: Error finding container a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08: Status 404 returned error can't find the container with id a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08 Jan 30 23:16:02 crc kubenswrapper[4979]: I0130 23:16:02.690703 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c3e02f71-2ffc-45bb-9344-28ff1640cffd","Type":"ContainerStarted","Data":"a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08"} Jan 30 23:16:02 crc kubenswrapper[4979]: I0130 23:16:02.691957 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3aa75164-0d7b-4b9a-a21d-2c5834956114","Type":"ContainerStarted","Data":"589a37400598162d558e5fa543ddb7712657c0944f4c5b20620cd378be4d18d8"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.033474 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.063390 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.702583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3aa75164-0d7b-4b9a-a21d-2c5834956114","Type":"ContainerStarted","Data":"ca1c98fc9167e72ffec190cf39ce7fadd4aff01988d7f7a18950a4a96f293e92"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.703942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3aa75164-0d7b-4b9a-a21d-2c5834956114","Type":"ContainerStarted","Data":"0b4aab30be3d2ea4fccc4119bc2f102c24d2f8e4b1cc72c0baa72ac8cabe8b71"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.705001 4979 generic.go:334] "Generic (PLEG): container finished" podID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerID="e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2" exitCode=0 Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.705063 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerDied","Data":"e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.716706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c3e02f71-2ffc-45bb-9344-28ff1640cffd","Type":"ContainerStarted","Data":"f8ab7a6798449f8a857462d0197aa5edf748ba12b8453c7e3ce4db0501aa2adc"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 
23:16:03.716888 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c3e02f71-2ffc-45bb-9344-28ff1640cffd","Type":"ContainerStarted","Data":"a3ba727f0fa2aa37641c80860539b01715d738587555b835e31222add036a961"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.734307 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.992716162 podStartE2EDuration="3.734284115s" podCreationTimestamp="2026-01-30 23:16:00 +0000 UTC" firstStartedPulling="2026-01-30 23:16:01.888869891 +0000 UTC m=+5757.850116924" lastFinishedPulling="2026-01-30 23:16:02.630437844 +0000 UTC m=+5758.591684877" observedRunningTime="2026-01-30 23:16:03.725775735 +0000 UTC m=+5759.687022768" watchObservedRunningTime="2026-01-30 23:16:03.734284115 +0000 UTC m=+5759.695531148" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.781477 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.785477 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.819058 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.125153244 podStartE2EDuration="2.819019498s" podCreationTimestamp="2026-01-30 23:16:01 +0000 UTC" firstStartedPulling="2026-01-30 23:16:02.424395086 +0000 UTC m=+5758.385642139" lastFinishedPulling="2026-01-30 23:16:03.11826136 +0000 UTC m=+5759.079508393" observedRunningTime="2026-01-30 23:16:03.762364765 +0000 UTC m=+5759.723611788" watchObservedRunningTime="2026-01-30 23:16:03.819019498 +0000 UTC m=+5759.780266531" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890667 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890708 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890879 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890927 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: 
\"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890964 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890980 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.891174 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.891895 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.892190 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs" (OuterVolumeSpecName: "logs") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.920837 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld" (OuterVolumeSpecName: "kube-api-access-9wnld") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "kube-api-access-9wnld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.934104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.963474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts" (OuterVolumeSpecName: "scripts") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993887 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993925 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993940 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993952 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.023200 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data" (OuterVolumeSpecName: "config-data") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.045628 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.095775 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.095810 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.399259 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.738505 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.743016 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerDied","Data":"5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1"} Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.743116 4979 scope.go:117] "RemoveContainer" containerID="e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.788229 4979 scope.go:117] "RemoveContainer" containerID="436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.793800 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.818800 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.846206 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: E0130 23:16:04.846967 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847076 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" Jan 30 23:16:04 crc kubenswrapper[4979]: E0130 23:16:04.847175 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847240 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847535 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847625 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.849086 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.851665 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.851710 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914289 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914344 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914399 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d24af8b-b86a-4604-82a5-e3d014dba7b5-logs\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914423 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d24af8b-b86a-4604-82a5-e3d014dba7b5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914446 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914464 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqtn\" (UniqueName: \"kubernetes.io/projected/5d24af8b-b86a-4604-82a5-e3d014dba7b5-kube-api-access-nbqtn\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914481 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-scripts\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016118 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016226 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016296 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d24af8b-b86a-4604-82a5-e3d014dba7b5-logs\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d24af8b-b86a-4604-82a5-e3d014dba7b5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016459 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqtn\" (UniqueName: \"kubernetes.io/projected/5d24af8b-b86a-4604-82a5-e3d014dba7b5-kube-api-access-nbqtn\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-scripts\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.017794 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d24af8b-b86a-4604-82a5-e3d014dba7b5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.018665 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d24af8b-b86a-4604-82a5-e3d014dba7b5-logs\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.023154 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.024893 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.025300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-scripts\") pod 
\"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.033743 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.036375 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqtn\" (UniqueName: \"kubernetes.io/projected/5d24af8b-b86a-4604-82a5-e3d014dba7b5-kube-api-access-nbqtn\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.076749 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:05 crc kubenswrapper[4979]: E0130 23:16:05.077391 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.094710 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" path="/var/lib/kubelet/pods/e5c27922-b152-465f-b0fe-117e336c7ae0/volumes" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.171712 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.648742 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.755214 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d24af8b-b86a-4604-82a5-e3d014dba7b5","Type":"ContainerStarted","Data":"28c785f20f07dcb595b163436195aad2163f19866dc7189b0e361956b2a31eb2"} Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.299807 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.766550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d24af8b-b86a-4604-82a5-e3d014dba7b5","Type":"ContainerStarted","Data":"f65da1aabfac67d5bb9a1702c858a2b56192f3de77040842ca98eda9fbb25ac9"} Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.833198 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.938116 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.938450 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.940078 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.951933 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.015918 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.017333 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.017416 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.020458 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.782188 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d24af8b-b86a-4604-82a5-e3d014dba7b5","Type":"ContainerStarted","Data":"97b5cf56db6d6e2ed3b3f27849b1b2885b68150eb1cb49d92ee77f5261216a27"} Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.783139 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.783188 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.793578 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.806292 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.806266501 podStartE2EDuration="3.806266501s" podCreationTimestamp="2026-01-30 23:16:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:16:07.803809265 +0000 UTC m=+5763.765056328" watchObservedRunningTime="2026-01-30 23:16:07.806266501 +0000 UTC m=+5763.767513564" Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.615372 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.675323 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.797984 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" containerID="cri-o://f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33" gracePeriod=30 Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.798129 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" containerID="cri-o://11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce" gracePeriod=30 Jan 30 23:16:10 crc kubenswrapper[4979]: I0130 23:16:10.813509 4979 generic.go:334] "Generic (PLEG): container finished" podID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerID="11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce" exitCode=0 Jan 30 23:16:10 crc kubenswrapper[4979]: I0130 23:16:10.813577 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerDied","Data":"11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce"} Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.513258 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.821926 4979 generic.go:334] "Generic (PLEG): container finished" podID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerID="f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33" exitCode=0 Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.821966 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerDied","Data":"f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33"} Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.821991 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerDied","Data":"4426c069aa3a6213267e35e8b2382f441791df8002dd43fd176594900d983cdf"} Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.822000 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4426c069aa3a6213267e35e8b2382f441791df8002dd43fd176594900d983cdf" Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.877991 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.075942 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076118 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076156 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076208 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076262 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076296 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.077976 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.078181 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.085379 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g" (OuterVolumeSpecName: "kube-api-access-5kb8g") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "kube-api-access-5kb8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.091789 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.100974 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts" (OuterVolumeSpecName: "scripts") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.149233 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179738 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179892 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179942 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179988 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.180014 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.208496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data" (OuterVolumeSpecName: "config-data") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.282257 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.833872 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.896288 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.908005 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.935591 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: E0130 23:16:12.936735 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.936766 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" Jan 30 23:16:12 crc kubenswrapper[4979]: E0130 23:16:12.936830 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.936843 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.942737 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.942850 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.946124 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.949289 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.954222 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998004 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998456 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998616 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88f999da-53cb-4370-ab43-2a6623aa6d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998659 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdz7f\" (UniqueName: \"kubernetes.io/projected/88f999da-53cb-4370-ab43-2a6623aa6d51-kube-api-access-jdz7f\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.079585 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" path="/var/lib/kubelet/pods/1f2217ce-8d18-43fb-a08f-f39144f5aeed/volumes" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100095 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/88f999da-53cb-4370-ab43-2a6623aa6d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100122 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdz7f\" (UniqueName: \"kubernetes.io/projected/88f999da-53cb-4370-ab43-2a6623aa6d51-kube-api-access-jdz7f\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100162 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100190 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100794 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88f999da-53cb-4370-ab43-2a6623aa6d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.105765 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.106434 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.115460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.115715 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.118876 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdz7f\" (UniqueName: \"kubernetes.io/projected/88f999da-53cb-4370-ab43-2a6623aa6d51-kube-api-access-jdz7f\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.272674 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.722528 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:13 crc kubenswrapper[4979]: W0130 23:16:13.724334 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88f999da_53cb_4370_ab43_2a6623aa6d51.slice/crio-3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec WatchSource:0}: Error finding container 3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec: Status 404 returned error can't find the container with id 3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.849428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"88f999da-53cb-4370-ab43-2a6623aa6d51","Type":"ContainerStarted","Data":"3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec"} Jan 30 23:16:14 crc kubenswrapper[4979]: I0130 23:16:14.861132 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"88f999da-53cb-4370-ab43-2a6623aa6d51","Type":"ContainerStarted","Data":"6eb202b5be229d12aa3a7c54c1aa6e094afb7f2bff5cf1919ed376f6d4bb60e9"} Jan 30 23:16:15 crc kubenswrapper[4979]: I0130 23:16:15.876960 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"88f999da-53cb-4370-ab43-2a6623aa6d51","Type":"ContainerStarted","Data":"d9dd9f3b7244fbd0a67377eaa8636714f3e3b386eb714247a6af41121aae1a0d"} Jan 30 23:16:15 crc kubenswrapper[4979]: I0130 23:16:15.908942 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.908927277 podStartE2EDuration="3.908927277s" podCreationTimestamp="2026-01-30 23:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:16:15.906579993 +0000 UTC m=+5771.867827026" watchObservedRunningTime="2026-01-30 23:16:15.908927277 +0000 UTC m=+5771.870174310" Jan 30 23:16:16 crc kubenswrapper[4979]: I0130 23:16:16.069827 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:16 crc kubenswrapper[4979]: E0130 23:16:16.070219 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:16 crc kubenswrapper[4979]: I0130 23:16:16.983106 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 23:16:18 crc kubenswrapper[4979]: I0130 23:16:18.273013 
4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 23:16:23 crc kubenswrapper[4979]: I0130 23:16:23.560491 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 23:16:31 crc kubenswrapper[4979]: I0130 23:16:31.070590 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:31 crc kubenswrapper[4979]: E0130 23:16:31.071694 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:43 crc kubenswrapper[4979]: I0130 23:16:43.070771 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:43 crc kubenswrapper[4979]: E0130 23:16:43.072550 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:57 crc kubenswrapper[4979]: I0130 23:16:57.071802 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:57 crc kubenswrapper[4979]: E0130 23:16:57.072651 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:08 crc kubenswrapper[4979]: I0130 23:17:08.069894 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:08 crc kubenswrapper[4979]: E0130 23:17:08.071217 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:23 crc kubenswrapper[4979]: I0130 23:17:23.070785 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:23 crc kubenswrapper[4979]: E0130 23:17:23.072243 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:34 crc kubenswrapper[4979]: I0130 23:17:34.070878 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:34 crc kubenswrapper[4979]: E0130 23:17:34.072631 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:49 crc kubenswrapper[4979]: I0130 23:17:49.070376 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:49 crc kubenswrapper[4979]: E0130 23:17:49.071305 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.196256 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kssd2"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.198271 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.199854 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-k69pj" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.201742 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.210272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.244636 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-54q6d"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.257485 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.259311 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-54q6d"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.354792 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-lib\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.354937 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2524172b-c864-4a7f-8c66-ffd219fa7be6-scripts\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355070 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355087 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-log\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355265 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-log-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355301 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355572 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdn2v\" (UniqueName: \"kubernetes.io/projected/2524172b-c864-4a7f-8c66-ffd219fa7be6-kube-api-access-tdn2v\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355599 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-run\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-etc-ovs\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355890 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-scripts\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl98\" (UniqueName: \"kubernetes.io/projected/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-kube-api-access-ngl98\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457357 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-log\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-log-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457454 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdn2v\" (UniqueName: \"kubernetes.io/projected/2524172b-c864-4a7f-8c66-ffd219fa7be6-kube-api-access-tdn2v\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-run\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-etc-ovs\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " 
pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457555 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-scripts\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457571 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngl98\" (UniqueName: \"kubernetes.io/projected/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-kube-api-access-ngl98\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457686 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-log\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457688 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457730 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-etc-ovs\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-lib\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457819 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-run\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457613 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-lib\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-log-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2524172b-c864-4a7f-8c66-ffd219fa7be6-scripts\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.459727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-scripts\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.460529 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2524172b-c864-4a7f-8c66-ffd219fa7be6-scripts\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.476085 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdn2v\" (UniqueName: \"kubernetes.io/projected/2524172b-c864-4a7f-8c66-ffd219fa7be6-kube-api-access-tdn2v\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.476766 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngl98\" (UniqueName: \"kubernetes.io/projected/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-kube-api-access-ngl98\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.594614 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.623487 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.070224 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:03 crc kubenswrapper[4979]: E0130 23:18:03.070858 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.083635 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.262298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2" event={"ID":"2524172b-c864-4a7f-8c66-ffd219fa7be6","Type":"ContainerStarted","Data":"917cbd53d76ffde1b05d915beffa69779c72378b9b6bdbd5ae98c2e41c7bd228"} Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.500431 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-54q6d"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.790705 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.792641 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.809421 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.884181 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-56vn2"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.885689 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.887474 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.892248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.892321 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.895085 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-56vn2"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994297 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggw2z\" (UniqueName: \"kubernetes.io/projected/927cfb5e-5147-4154-aad7-bd9d4aae47b2-kube-api-access-ggw2z\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994413 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927cfb5e-5147-4154-aad7-bd9d4aae47b2-config\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994669 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994746 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovn-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994772 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovs-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994928 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"octavia-db-create-mp8qq\" (UID: 
\"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.996256 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.014012 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggw2z\" (UniqueName: \"kubernetes.io/projected/927cfb5e-5147-4154-aad7-bd9d4aae47b2-kube-api-access-ggw2z\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927cfb5e-5147-4154-aad7-bd9d4aae47b2-config\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovn-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096512 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovs-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096845 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovs-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096967 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovn-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.097160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927cfb5e-5147-4154-aad7-bd9d4aae47b2-config\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 
23:18:04.119044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggw2z\" (UniqueName: \"kubernetes.io/projected/927cfb5e-5147-4154-aad7-bd9d4aae47b2-kube-api-access-ggw2z\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.143256 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.206245 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.286972 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2" event={"ID":"2524172b-c864-4a7f-8c66-ffd219fa7be6","Type":"ContainerStarted","Data":"31c5020918aba8d092a19e0b7ca5bbaebba679cb38d8dc662a860dbe0e160ff3"} Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.287367 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.290547 4979 generic.go:334] "Generic (PLEG): container finished" podID="5f8d6c92-62f8-427c-8208-cf3ba6d98af7" containerID="b81c51bfa60ca7e89875d502bf09c62c926e61ce67e71e34d854b9859205ea7c" exitCode=0 Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.290589 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerDied","Data":"b81c51bfa60ca7e89875d502bf09c62c926e61ce67e71e34d854b9859205ea7c"} Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.290612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerStarted","Data":"9f7defbe2e495cfe026a2c43f2ac9cad6d98d86a51e867fce732b4f6bd13016f"} Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.302368 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kssd2" podStartSLOduration=2.302347989 podStartE2EDuration="2.302347989s" podCreationTimestamp="2026-01-30 23:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:04.301398323 +0000 UTC m=+5880.262645356" watchObservedRunningTime="2026-01-30 23:18:04.302347989 +0000 UTC m=+5880.263595022" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.626354 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.707720 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-56vn2"] Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.271588 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.273594 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.276851 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.281178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.302645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerStarted","Data":"d2639429e456fa5a2ff6cd80de0c22162641631c2df51f416d2d9994e6717acf"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.302688 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerStarted","Data":"ef33eb93d9ee71c73bc2c045eb66e3e9eaddb1bdc39430e99ac2050a78298d92"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.302820 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.310637 4979 generic.go:334] "Generic (PLEG): container finished" podID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerID="2901952f949f2b6e5bf0bdfc295d7dcb142b237e525207eca8287fadd9dc45a0" exitCode=0 Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.310693 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-mp8qq" event={"ID":"cad393e9-51ee-4f44-976c-fb9c28487d67","Type":"ContainerDied","Data":"2901952f949f2b6e5bf0bdfc295d7dcb142b237e525207eca8287fadd9dc45a0"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.310716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-mp8qq" event={"ID":"cad393e9-51ee-4f44-976c-fb9c28487d67","Type":"ContainerStarted","Data":"a568a2475cb9c1d659819c3ad91a11115dff4d6329bfcfa402ac75e51b1e7009"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.319675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-56vn2" event={"ID":"927cfb5e-5147-4154-aad7-bd9d4aae47b2","Type":"ContainerStarted","Data":"8073cff7c73e8d5c03f0f19c3c46e337e58c1935232c0b7a368e8c519e538ab6"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.319725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-56vn2" event={"ID":"927cfb5e-5147-4154-aad7-bd9d4aae47b2","Type":"ContainerStarted","Data":"275824f36c95e1b9064e3c9aa9149b5eb632f11db01ec1bdc9f820ee29b1dcd6"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.330016 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-54q6d" podStartSLOduration=3.329995517 podStartE2EDuration="3.329995517s" podCreationTimestamp="2026-01-30 23:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:05.326978175 +0000 UTC m=+5881.288225198" watchObservedRunningTime="2026-01-30 23:18:05.329995517 +0000 UTC m=+5881.291242550" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.358974 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-56vn2" podStartSLOduration=2.358952921 podStartE2EDuration="2.358952921s" 
podCreationTimestamp="2026-01-30 23:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:05.354368316 +0000 UTC m=+5881.315615349" watchObservedRunningTime="2026-01-30 23:18:05.358952921 +0000 UTC m=+5881.320199954" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.430017 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.430130 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.531955 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.532006 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.532912 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.550510 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.588565 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.065996 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:18:06 crc kubenswrapper[4979]: W0130 23:18:06.075260 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa0fc85_dd34_469d_a6b4_500d9e17e8cd.slice/crio-38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209 WatchSource:0}: Error finding container 38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209: Status 404 returned error can't find the container with id 38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209 Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.330233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerStarted","Data":"295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2"} Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.330508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerStarted","Data":"38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209"} Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.330648 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.354519 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-22c0-account-create-update-pwzqj" podStartSLOduration=1.3545013799999999 podStartE2EDuration="1.35450138s" podCreationTimestamp="2026-01-30 23:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:06.345319231 +0000 UTC m=+5882.306566264" watchObservedRunningTime="2026-01-30 23:18:06.35450138 +0000 UTC m=+5882.315748403" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.690874 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.759120 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"cad393e9-51ee-4f44-976c-fb9c28487d67\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.759247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"cad393e9-51ee-4f44-976c-fb9c28487d67\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.761180 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cad393e9-51ee-4f44-976c-fb9c28487d67" (UID: "cad393e9-51ee-4f44-976c-fb9c28487d67"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.765797 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx" (OuterVolumeSpecName: "kube-api-access-f7glx") pod "cad393e9-51ee-4f44-976c-fb9c28487d67" (UID: "cad393e9-51ee-4f44-976c-fb9c28487d67"). InnerVolumeSpecName "kube-api-access-f7glx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.861349 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.861379 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.341370 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.341356 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-mp8qq" event={"ID":"cad393e9-51ee-4f44-976c-fb9c28487d67","Type":"ContainerDied","Data":"a568a2475cb9c1d659819c3ad91a11115dff4d6329bfcfa402ac75e51b1e7009"} Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.341518 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a568a2475cb9c1d659819c3ad91a11115dff4d6329bfcfa402ac75e51b1e7009" Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.343943 4979 generic.go:334] "Generic (PLEG): container finished" podID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerID="295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2" exitCode=0 Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.343989 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerDied","Data":"295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2"} Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.733635 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.800226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.800369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.800832 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" (UID: "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.811394 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn" (OuterVolumeSpecName: "kube-api-access-wpqgn") pod "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" (UID: "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd"). InnerVolumeSpecName "kube-api-access-wpqgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.902406 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.902446 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.048003 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fcp6h"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.055245 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.064123 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.079639 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" path="/var/lib/kubelet/pods/cd1984c3-c561-48d8-8e99-a596088b25b7/volumes" Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.080267 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fcp6h"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.361081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerDied","Data":"38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209"} Jan 30 23:18:09 crc 
kubenswrapper[4979]: I0130 23:18:09.361407 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209" Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.361297 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.087931 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" path="/var/lib/kubelet/pods/244815ff-89c6-49ac-91e1-4d8f44de6066/volumes" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.692454 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:18:11 crc kubenswrapper[4979]: E0130 23:18:11.692957 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerName="mariadb-database-create" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.692983 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerName="mariadb-database-create" Jan 30 23:18:11 crc kubenswrapper[4979]: E0130 23:18:11.693026 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerName="mariadb-account-create-update" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.693058 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerName="mariadb-account-create-update" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.693301 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerName="mariadb-account-create-update" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.693329 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerName="mariadb-database-create" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.694239 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.703981 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.759808 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.759935 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.878788 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.879115 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.881140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.899050 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.029022 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.169934 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.174810 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.185856 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.288534 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.288601 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.288918 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391155 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391205 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391836 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391891 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.407636 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.497497 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.540233 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.839408 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.842057 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.844716 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.852364 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.901281 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.901403 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.003302 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.003420 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.004275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc 
kubenswrapper[4979]: I0130 23:18:13.013685 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.021935 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.170775 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.410331 4979 generic.go:334] "Generic (PLEG): container finished" podID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerID="ab9d6fd9b6c78c1609831430497301a395dbc97dc2a1cc5b8ce36db173127e64" exitCode=0 Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.410417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vn66f" event={"ID":"5d8f6093-1ce3-4cb4-829a-71a3aaded46f","Type":"ContainerDied","Data":"ab9d6fd9b6c78c1609831430497301a395dbc97dc2a1cc5b8ce36db173127e64"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.410637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vn66f" event={"ID":"5d8f6093-1ce3-4cb4-829a-71a3aaded46f","Type":"ContainerStarted","Data":"3fb03c5a5aa72ebda057704e5eb39535a99c2e8e757bfc8a37d44531abbcfa6f"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.413646 4979 generic.go:334] "Generic (PLEG): container finished" podID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerID="6d616084358c968a0cef1f0dabd45c508a2b560e879d97f236802954ea33a0fb" exitCode=0 Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.413875 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"6d616084358c968a0cef1f0dabd45c508a2b560e879d97f236802954ea33a0fb"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.413921 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerStarted","Data":"4aed268deef5dedad8a08efebfaa72dcd92b76e589a8cdf33b18ee34f3580454"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.677906 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:18:13 crc kubenswrapper[4979]: W0130 23:18:13.685074 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9549a4c7_2fb8_4f18_a7d3_902949e90d8c.slice/crio-9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b WatchSource:0}: Error finding container 9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b: Status 404 returned error can't find the container with id 9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.071007 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:14 crc kubenswrapper[4979]: E0130 23:18:14.071571 4979 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.423973 4979 generic.go:334] "Generic (PLEG): container finished" podID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerID="4585a42ea864cc4af87b4f754b0c7b9540e84f1af59fb62e004a04f42ca82ee5" exitCode=0 Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.424198 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ff98-account-create-update-szcww" event={"ID":"9549a4c7-2fb8-4f18-a7d3-902949e90d8c","Type":"ContainerDied","Data":"4585a42ea864cc4af87b4f754b0c7b9540e84f1af59fb62e004a04f42ca82ee5"} Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.424242 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ff98-account-create-update-szcww" event={"ID":"9549a4c7-2fb8-4f18-a7d3-902949e90d8c","Type":"ContainerStarted","Data":"9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b"} Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.428213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerStarted","Data":"3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667"} Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.832249 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.944398 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.944618 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.945303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d8f6093-1ce3-4cb4-829a-71a3aaded46f" (UID: "5d8f6093-1ce3-4cb4-829a-71a3aaded46f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.957571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h" (OuterVolumeSpecName: "kube-api-access-2gw6h") pod "5d8f6093-1ce3-4cb4-829a-71a3aaded46f" (UID: "5d8f6093-1ce3-4cb4-829a-71a3aaded46f"). InnerVolumeSpecName "kube-api-access-2gw6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.031544 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.044417 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.047299 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.047351 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.090761 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" path="/var/lib/kubelet/pods/7e90fa06-119c-454e-9f4e-da0b5bff99bb/volumes" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.457074 4979 generic.go:334] "Generic (PLEG): container finished" podID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerID="3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667" exitCode=0 Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.457239 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667"} Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.462172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vn66f" event={"ID":"5d8f6093-1ce3-4cb4-829a-71a3aaded46f","Type":"ContainerDied","Data":"3fb03c5a5aa72ebda057704e5eb39535a99c2e8e757bfc8a37d44531abbcfa6f"} Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.462242 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb03c5a5aa72ebda057704e5eb39535a99c2e8e757bfc8a37d44531abbcfa6f" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.462195 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.900704 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.964819 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.965369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.966480 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9549a4c7-2fb8-4f18-a7d3-902949e90d8c" (UID: "9549a4c7-2fb8-4f18-a7d3-902949e90d8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.970089 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh" (OuterVolumeSpecName: "kube-api-access-zwldh") pod "9549a4c7-2fb8-4f18-a7d3-902949e90d8c" (UID: "9549a4c7-2fb8-4f18-a7d3-902949e90d8c"). InnerVolumeSpecName "kube-api-access-zwldh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.067665 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.067696 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.470226 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ff98-account-create-update-szcww" event={"ID":"9549a4c7-2fb8-4f18-a7d3-902949e90d8c","Type":"ContainerDied","Data":"9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b"} Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.470261 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.470268 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.472131 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerStarted","Data":"5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34"} Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.491508 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7k4g" podStartSLOduration=2.020111714 podStartE2EDuration="4.491489543s" podCreationTimestamp="2026-01-30 23:18:12 +0000 UTC" firstStartedPulling="2026-01-30 23:18:13.415738484 +0000 UTC m=+5889.376985517" lastFinishedPulling="2026-01-30 23:18:15.887116293 +0000 UTC m=+5891.848363346" observedRunningTime="2026-01-30 23:18:16.488743779 +0000 UTC m=+5892.449990812" watchObservedRunningTime="2026-01-30 23:18:16.491489543 +0000 UTC m=+5892.452736576" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.448107 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-657b9576cf-gswsb"] Jan 30 23:18:18 crc kubenswrapper[4979]: E0130 23:18:18.448870 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerName="mariadb-database-create" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.448888 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerName="mariadb-database-create" Jan 30 23:18:18 crc kubenswrapper[4979]: E0130 23:18:18.448920 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerName="mariadb-account-create-update" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.448927 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerName="mariadb-account-create-update" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.449150 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerName="mariadb-account-create-update" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.449165 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerName="mariadb-database-create" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.450612 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.453090 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.453477 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-9f6bb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.453850 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.459576 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-657b9576cf-gswsb"] Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520085 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-octavia-run\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520421 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520447 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data-merged\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-scripts\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520552 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-combined-ca-bundle\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621704 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-scripts\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621760 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-combined-ca-bundle\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc 
kubenswrapper[4979]: I0130 23:18:18.621849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-octavia-run\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621878 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621896 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data-merged\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.622371 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data-merged\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.622647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-octavia-run\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.629958 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-combined-ca-bundle\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.629968 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-scripts\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.630634 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.775309 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:19 crc kubenswrapper[4979]: I0130 23:18:19.373241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-657b9576cf-gswsb"] Jan 30 23:18:19 crc kubenswrapper[4979]: I0130 23:18:19.496527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerStarted","Data":"b14bfe7a25a727f299de9143fecc9a61989a8fc979a15a572320f6b95cdd47d8"} Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.498182 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.498574 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.552358 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.600451 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.793946 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:24 crc kubenswrapper[4979]: I0130 23:18:24.547597 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7k4g" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" containerID="cri-o://5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34" gracePeriod=2 Jan 30 23:18:25 crc kubenswrapper[4979]: I0130 23:18:25.571290 4979 generic.go:334] "Generic (PLEG): container finished" podID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerID="5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34" exitCode=0 Jan 30 23:18:25 crc kubenswrapper[4979]: I0130 23:18:25.571351 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34"} Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.318025 4979 scope.go:117] "RemoveContainer" containerID="958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.666372 4979 scope.go:117] "RemoveContainer" containerID="ce550ab1c6e408aea10d06173b7920d5c55fe0078943da671c3598da2665ca61" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.774006 4979 scope.go:117] "RemoveContainer" containerID="14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.829466 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.892112 4979 scope.go:117] "RemoveContainer" containerID="519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.910404 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"978132d6-bbdd-4d38-b69d-8713bafb726b\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.910997 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"978132d6-bbdd-4d38-b69d-8713bafb726b\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.911087 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"978132d6-bbdd-4d38-b69d-8713bafb726b\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.911755 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities" (OuterVolumeSpecName: "utilities") pod "978132d6-bbdd-4d38-b69d-8713bafb726b" (UID: "978132d6-bbdd-4d38-b69d-8713bafb726b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.931390 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b" (OuterVolumeSpecName: "kube-api-access-c4d5b") pod "978132d6-bbdd-4d38-b69d-8713bafb726b" (UID: "978132d6-bbdd-4d38-b69d-8713bafb726b"). InnerVolumeSpecName "kube-api-access-c4d5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.948654 4979 scope.go:117] "RemoveContainer" containerID="949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.972904 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "978132d6-bbdd-4d38-b69d-8713bafb726b" (UID: "978132d6-bbdd-4d38-b69d-8713bafb726b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.013915 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.013973 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.013994 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.609231 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc255f37-2650-4c57-b4d0-4709be5a5d25" containerID="9d0806b314d8bcb1261b5c6e83a0b50664719486330b42e0947e70be649acf43" exitCode=0 Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.609294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerDied","Data":"9d0806b314d8bcb1261b5c6e83a0b50664719486330b42e0947e70be649acf43"} Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.615990 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"4aed268deef5dedad8a08efebfaa72dcd92b76e589a8cdf33b18ee34f3580454"} Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.616072 4979 scope.go:117] "RemoveContainer" containerID="5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.616151 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.637635 4979 scope.go:117] "RemoveContainer" containerID="3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.671707 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.681818 4979 scope.go:117] "RemoveContainer" containerID="6d616084358c968a0cef1f0dabd45c508a2b560e879d97f236802954ea33a0fb" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.683105 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.062868 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.078858 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:29 crc kubenswrapper[4979]: E0130 23:18:29.081142 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.103526 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" path="/var/lib/kubelet/pods/978132d6-bbdd-4d38-b69d-8713bafb726b/volumes" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.104512 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.628602 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerStarted","Data":"da34fa743d281aaa91d004c0354f1bd32f61a83ed05262f3a63a8ccacc65f81f"} Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.629011 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerStarted","Data":"2129a5ba5ce7897b561d4a67a7dc62dda75bf684cb386fbbd14c2252d29885db"} Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.629057 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.629071 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.649428 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-657b9576cf-gswsb" podStartSLOduration=3.315548708 podStartE2EDuration="11.649405561s" podCreationTimestamp="2026-01-30 23:18:18 +0000 UTC" firstStartedPulling="2026-01-30 23:18:19.381852044 +0000 UTC m=+5895.343099087" lastFinishedPulling="2026-01-30 23:18:27.715708877 +0000 UTC m=+5903.676955940" observedRunningTime="2026-01-30 23:18:29.64531915 +0000 UTC m=+5905.606566183" 
watchObservedRunningTime="2026-01-30 23:18:29.649405561 +0000 UTC m=+5905.610652594" Jan 30 23:18:31 crc kubenswrapper[4979]: I0130 23:18:31.093789 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" path="/var/lib/kubelet/pods/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e/volumes" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.632898 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kssd2" podUID="2524172b-c864-4a7f-8c66-ffd219fa7be6" containerName="ovn-controller" probeResult="failure" output=< Jan 30 23:18:37 crc kubenswrapper[4979]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 23:18:37 crc kubenswrapper[4979]: > Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.666054 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.672532 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.711707 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.812754 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:37 crc kubenswrapper[4979]: E0130 23:18:37.816881 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-content" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.816920 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-content" Jan 30 23:18:37 crc kubenswrapper[4979]: E0130 23:18:37.817003 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.817017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" Jan 30 23:18:37 crc kubenswrapper[4979]: E0130 23:18:37.817077 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-utilities" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.817088 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-utilities" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.817803 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.822824 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.828020 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.839310 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.950193 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.951915 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.951971 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.952027 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.952106 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.952134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053317 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053478 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053522 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053586 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054630 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054863 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054904 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.055726 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.075431 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.152419 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.718060 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:39 crc kubenswrapper[4979]: I0130 23:18:39.785864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerStarted","Data":"33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd"} Jan 30 23:18:39 crc kubenswrapper[4979]: I0130 23:18:39.787146 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerStarted","Data":"17b815f61ca5cd4ba6b905cd4cd028bc1fba77ac29df1fa57a4af74954b44888"} Jan 30 23:18:39 crc kubenswrapper[4979]: I0130 23:18:39.812440 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kssd2-config-khdnr" podStartSLOduration=2.8124176 podStartE2EDuration="2.8124176s" podCreationTimestamp="2026-01-30 23:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:39.804759642 +0000 UTC m=+5915.766006685" watchObservedRunningTime="2026-01-30 23:18:39.8124176 +0000 UTC m=+5915.773664633" Jan 30 23:18:40 crc kubenswrapper[4979]: I0130 23:18:40.798921 4979 generic.go:334] "Generic (PLEG): container finished" podID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" containerID="33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd" exitCode=0 Jan 30 23:18:40 crc kubenswrapper[4979]: I0130 23:18:40.799354 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerDied","Data":"33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd"} Jan 30 23:18:41 crc kubenswrapper[4979]: I0130 23:18:41.701586 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.136269 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251731 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251822 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251853 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251866 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251913 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251939 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run" (OuterVolumeSpecName: "var-run") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252404 4979 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252422 4979 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252431 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252462 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252656 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts" (OuterVolumeSpecName: "scripts") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.257423 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz" (OuterVolumeSpecName: "kube-api-access-5whvz") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "kube-api-access-5whvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.353896 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.353938 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.353947 4979 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.636646 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.820235 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerDied","Data":"17b815f61ca5cd4ba6b905cd4cd028bc1fba77ac29df1fa57a4af74954b44888"} Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.820278 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b815f61ca5cd4ba6b905cd4cd028bc1fba77ac29df1fa57a4af74954b44888" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.820300 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.891147 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.901311 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:43 crc kubenswrapper[4979]: I0130 23:18:43.069930 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:43 crc kubenswrapper[4979]: E0130 23:18:43.070430 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:43 crc kubenswrapper[4979]: I0130 23:18:43.081314 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" path="/var/lib/kubelet/pods/9a7e245e-175c-4fb3-b0de-b3d99a33548c/volumes" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.627196 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:50 crc kubenswrapper[4979]: E0130 23:18:50.628092 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" containerName="ovn-config" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.628105 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" 
containerName="ovn-config" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.628478 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" containerName="ovn-config" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.629520 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.639421 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.639632 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.639818 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.644306 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674168 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-scripts\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e59aa6da-4048-4cf0-add7-cb98472425cb-hm-ports\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674385 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data-merged\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776377 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data-merged\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-scripts\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776570 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e59aa6da-4048-4cf0-add7-cb98472425cb-hm-ports\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.777070 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data-merged\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.777399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e59aa6da-4048-4cf0-add7-cb98472425cb-hm-ports\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.782188 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-scripts\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.795333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.960002 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.201474 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.203852 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.209825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.214775 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.287841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.288133 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.389601 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.389954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.390396 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.394924 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.531841 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.570094 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:51 crc kubenswrapper[4979]: W0130 23:18:51.587375 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59aa6da_4048_4cf0_add7_cb98472425cb.slice/crio-bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31 WatchSource:0}: Error finding container bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31: Status 404 returned error can't find the container with id bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31 Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.703782 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.911952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerStarted","Data":"bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31"} Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.980885 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:18:51 crc kubenswrapper[4979]: W0130 23:18:51.985384 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56781d53_1264_465c_bee8_378a284703f7.slice/crio-c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517 WatchSource:0}: Error finding container c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517: Status 404 returned error can't find the container with id c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517 Jan 30 23:18:52 crc kubenswrapper[4979]: I0130 23:18:52.925156 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerStarted","Data":"c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517"} Jan 30 23:18:53 crc kubenswrapper[4979]: I0130 23:18:53.936253 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerStarted","Data":"f1033d956b8cb5eb27d9bfcbb895ec0576f18b1da54b2ef1eb908919ff379383"} Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.654777 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.656660 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.661710 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.661762 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.662105 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.665680 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808309 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808389 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e7a38a33-332b-484f-a620-5ecc2b52d9d8-hm-ports\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808513 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data-merged\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808595 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-scripts\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808632 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-combined-ca-bundle\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808686 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-amphora-certs\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911019 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data-merged\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 
30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911155 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-scripts\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911189 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-combined-ca-bundle\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911242 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-amphora-certs\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911309 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e7a38a33-332b-484f-a620-5ecc2b52d9d8-hm-ports\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data-merged\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.912647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e7a38a33-332b-484f-a620-5ecc2b52d9d8-hm-ports\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.919854 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-amphora-certs\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.923654 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-combined-ca-bundle\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.924000 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-scripts\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.925113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.959287 4979 generic.go:334] "Generic (PLEG): container finished" podID="e59aa6da-4048-4cf0-add7-cb98472425cb" containerID="f1033d956b8cb5eb27d9bfcbb895ec0576f18b1da54b2ef1eb908919ff379383" exitCode=0 Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.959334 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerDied","Data":"f1033d956b8cb5eb27d9bfcbb895ec0576f18b1da54b2ef1eb908919ff379383"} Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.973204 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.069915 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:56 crc kubenswrapper[4979]: E0130 23:18:56.070173 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.559128 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.659004 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.660914 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.663526 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.672131 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4bcmq"]
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730527 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730582 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730608 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.832602 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.832670 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.833104 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.833189 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.833352 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.840056 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.840460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.840608 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.973770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerStarted","Data":"7314c3a10696c0162279a53d523ca81e2dc33745775732139a4f557e7214ce0f"}
Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.995478 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.290860 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-89w6g"]
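
The mount sequence above walks one pod's volumes through the kubelet's reconciler: VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp, covering three Secret-backed volumes and one EmptyDir. A minimal Go sketch of how a pod spec declares volumes like these, using the k8s.io/api/core/v1 types; the Secret names are assumptions, since only the volume names appear in the log:

    package lognotes

    import corev1 "k8s.io/api/core/v1"

    // dbSyncVolumes mirrors the four volumes the kubelet mounts for
    // octavia-db-sync-4bcmq above. Secret names are assumed; only the
    // volume names and plugin kinds appear in the log.
    func dbSyncVolumes() []corev1.Volume {
    	secretVol := func(name, secretName string) corev1.Volume {
    		return corev1.Volume{Name: name, VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{SecretName: secretName},
    		}}
    	}
    	return []corev1.Volume{
    		secretVol("scripts", "octavia-scripts"),                 // the Secret the reflector cached above
    		secretVol("config-data", "octavia-db-sync-config-data"), // assumed name
    		secretVol("combined-ca-bundle", "combined-ca-bundle"),   // assumed name
    		{Name: "config-data-merged", VolumeSource: corev1.VolumeSource{
    			EmptyDir: &corev1.EmptyDirVolumeSource{}, // kubernetes.io/empty-dir in the log
    		}},
    	}
    }
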
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.295963 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.299747 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.304606 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-89w6g"]
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.304662 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446366 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data-merged\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446430 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-combined-ca-bundle\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446467 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-scripts\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446527 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-amphora-certs\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/82154ec9-1201-41a2-a0f2-904b2db3c497-hm-ports\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.447152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553387 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553756 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data-merged\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553795 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-combined-ca-bundle\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553824 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-scripts\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-amphora-certs\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553917 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/82154ec9-1201-41a2-a0f2-904b2db3c497-hm-ports\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.563563 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data-merged\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.572082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/82154ec9-1201-41a2-a0f2-904b2db3c497-hm-ports\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.582892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-combined-ca-bundle\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.583816 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-amphora-certs\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.583908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-scripts\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.584873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.614495 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.922200 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4bcmq"]
Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.989932 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerStarted","Data":"5253fb3f28e3f279aeaa5586df71c111f0fa1f5d0fc7f40bb780f79332a13f31"}
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.041912 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-m8s2f"]
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.043678 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-m8s2f"
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.046286 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts"
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.046507 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data"
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.065937 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-m8s2f"]
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168321 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-amphora-certs\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f"
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168397 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data-merged\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f"
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/81ae9dc0-5b82-4990-878a-9570fc849c26-hm-ports\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f"
Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168783 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-scripts\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168939 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-combined-ca-bundle\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271130 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/81ae9dc0-5b82-4990-878a-9570fc849c26-hm-ports\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-scripts\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-combined-ca-bundle\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-amphora-certs\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data-merged\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data-merged\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.272764 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/81ae9dc0-5b82-4990-878a-9570fc849c26-hm-ports\") pod \"octavia-worker-m8s2f\" (UID: 
\"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.278455 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.278552 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-amphora-certs\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.278838 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-scripts\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.279929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-combined-ca-bundle\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.380346 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:59 crc kubenswrapper[4979]: I0130 23:18:59.001436 4979 generic.go:334] "Generic (PLEG): container finished" podID="e7a38a33-332b-484f-a620-5ecc2b52d9d8" containerID="5253fb3f28e3f279aeaa5586df71c111f0fa1f5d0fc7f40bb780f79332a13f31" exitCode=0 Jan 30 23:18:59 crc kubenswrapper[4979]: I0130 23:18:59.001481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerDied","Data":"5253fb3f28e3f279aeaa5586df71c111f0fa1f5d0fc7f40bb780f79332a13f31"} Jan 30 23:18:59 crc kubenswrapper[4979]: I0130 23:18:59.810148 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:19:05 crc kubenswrapper[4979]: I0130 23:19:05.061243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerStarted","Data":"77ea6560b514db80edcd2b1a784559cc0b05150e8bf7ea65bca5ae3812975520"} Jan 30 23:19:06 crc kubenswrapper[4979]: I0130 23:19:06.216654 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-m8s2f"] Jan 30 23:19:06 crc kubenswrapper[4979]: W0130 23:19:06.224427 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9dc0_5b82_4990_878a_9570fc849c26.slice/crio-458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da WatchSource:0}: Error finding container 458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da: Status 404 returned error can't find the container with id 458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da Jan 30 23:19:06 crc kubenswrapper[4979]: I0130 23:19:06.415266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/octavia-housekeeping-89w6g"] Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.081565 4979 generic.go:334] "Generic (PLEG): container finished" podID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerID="ccc43b745db314daf28ae463940cf548663352e7673aec67c6df25622cd0610d" exitCode=0 Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.081686 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerDied","Data":"ccc43b745db314daf28ae463940cf548663352e7673aec67c6df25622cd0610d"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.085099 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerStarted","Data":"7622d5d9245be933f5ef098f0dba2f280e7a0c263e201fdb4aaa77a617d21abe"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.085502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.087344 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerStarted","Data":"916e5eaacdcd1ede31ba8014d8722ea3d637e76447806973be824b09129f6af2"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.089563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerStarted","Data":"75046a3794d2dc69ec2ac4600f1a4a5d7fd9773b1a17f454be31058128a46988"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.089750 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.091272 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerStarted","Data":"458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.093095 4979 generic.go:334] "Generic (PLEG): container finished" podID="56781d53-1264-465c-bee8-378a284703f7" containerID="a4eb0f4089220dbf1416ca0ec0eb59b035c4c9519b93e961db07573db46163c3" exitCode=0 Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.093130 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerDied","Data":"a4eb0f4089220dbf1416ca0ec0eb59b035c4c9519b93e961db07573db46163c3"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.131133 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-pbxbw" podStartSLOduration=12.131115654 podStartE2EDuration="12.131115654s" podCreationTimestamp="2026-01-30 23:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:19:07.120972899 +0000 UTC m=+5943.082219932" watchObservedRunningTime="2026-01-30 23:19:07.131115654 +0000 UTC m=+5943.092362687" Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.198314 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-p7ttv" podStartSLOduration=3.490749266 
Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.198314 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-p7ttv" podStartSLOduration=3.490749266 podStartE2EDuration="17.198294842s" podCreationTimestamp="2026-01-30 23:18:50 +0000 UTC" firstStartedPulling="2026-01-30 23:18:51.589609663 +0000 UTC m=+5927.550856696" lastFinishedPulling="2026-01-30 23:19:05.297155219 +0000 UTC m=+5941.258402272" observedRunningTime="2026-01-30 23:19:07.157849217 +0000 UTC m=+5943.119096250" watchObservedRunningTime="2026-01-30 23:19:07.198294842 +0000 UTC m=+5943.159541875"
Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.123386 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerStarted","Data":"e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f"}
Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.130172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerStarted","Data":"ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91"}
Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.133384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerStarted","Data":"f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df"}
Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.178000 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-4bcmq" podStartSLOduration=13.177966311 podStartE2EDuration="13.177966311s" podCreationTimestamp="2026-01-30 23:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:19:09.177115608 +0000 UTC m=+5945.138362631" watchObservedRunningTime="2026-01-30 23:19:09.177966311 +0000 UTC m=+5945.139213334"
Jan 30 23:19:10 crc kubenswrapper[4979]: E0130 23:19:10.606680 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82154ec9_1201_41a2_a0f2_904b2db3c497.slice/crio-conmon-f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9dc0_5b82_4990_878a_9570fc849c26.slice/crio-e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9dc0_5b82_4990_878a_9570fc849c26.slice/crio-conmon-e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.069748 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"
Jan 30 23:19:11 crc kubenswrapper[4979]: E0130 23:19:11.070411 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
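
The machine-config-daemon entries above (and the repeats later in this log) show the kubelet's container restart back-off at its ceiling: each crash doubles the delay until it is capped, here at 5m0s. A sketch of that schedule, assuming the commonly documented defaults of a 10s initial delay doubling per restart:

    package lognotes

    import "time"

    // crashLoopDelay sketches the kubelet's restart back-off. The 10s base
    // and 5m cap are assumed defaults, consistent with the "back-off 5m0s"
    // message logged above.
    func crashLoopDelay(restarts int) time.Duration {
    	const initial, ceiling = 10 * time.Second, 5 * time.Minute
    	delay := initial
    	for i := 0; i < restarts; i++ {
    		delay *= 2
    		if delay >= ceiling {
    			return ceiling // "back-off 5m0s restarting failed container=..."
    		}
    	}
    	return delay
    }
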
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.155984 4979 generic.go:334] "Generic (PLEG): container finished" podID="81ae9dc0-5b82-4990-878a-9570fc849c26" containerID="e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f" exitCode=0
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.156083 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerDied","Data":"e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f"}
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.157879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerStarted","Data":"319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8"}
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.160076 4979 generic.go:334] "Generic (PLEG): container finished" podID="82154ec9-1201-41a2-a0f2-904b2db3c497" containerID="f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df" exitCode=0
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.160102 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerDied","Data":"f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df"}
Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.255845 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" podStartSLOduration=2.243274556 podStartE2EDuration="20.254074071s" podCreationTimestamp="2026-01-30 23:18:51 +0000 UTC" firstStartedPulling="2026-01-30 23:18:51.987580465 +0000 UTC m=+5927.948827498" lastFinishedPulling="2026-01-30 23:19:09.99837998 +0000 UTC m=+5945.959627013" observedRunningTime="2026-01-30 23:19:11.213449661 +0000 UTC m=+5947.174696724" watchObservedRunningTime="2026-01-30 23:19:11.254074071 +0000 UTC m=+5947.215321104"
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.175864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerStarted","Data":"e6af351ae99b8b8f7dbae150cb9b5f8e2b383e77c428dd70b1380c48df8f3b87"}
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.177686 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-m8s2f"
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.180610 4979 generic.go:334] "Generic (PLEG): container finished" podID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerID="ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91" exitCode=0
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.180696 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerDied","Data":"ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91"}
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.183859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerStarted","Data":"06b2f47bbce1e28ded085c3e47b4ed48b0fa9a310cd07751bb3d2bfa55db20fe"}
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.184902 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.229139 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-89w6g" podStartSLOduration=13.643609945 podStartE2EDuration="15.229119704s" podCreationTimestamp="2026-01-30 23:18:57 +0000 UTC" firstStartedPulling="2026-01-30 23:19:06.437125418 +0000 UTC m=+5942.398372451" lastFinishedPulling="2026-01-30 23:19:08.022635177 +0000 UTC m=+5943.983882210" observedRunningTime="2026-01-30 23:19:12.221650432 +0000 UTC m=+5948.182897465" watchObservedRunningTime="2026-01-30 23:19:12.229119704 +0000 UTC m=+5948.190366747"
Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.230995 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-m8s2f" podStartSLOduration=12.442562362 podStartE2EDuration="14.230983254s" podCreationTimestamp="2026-01-30 23:18:58 +0000 UTC" firstStartedPulling="2026-01-30 23:19:06.230966267 +0000 UTC m=+5942.192213300" lastFinishedPulling="2026-01-30 23:19:08.019387169 +0000 UTC m=+5943.980634192" observedRunningTime="2026-01-30 23:19:12.204495187 +0000 UTC m=+5948.165742220" watchObservedRunningTime="2026-01-30 23:19:12.230983254 +0000 UTC m=+5948.192230287"
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.589661 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") "
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696421 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") "
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696630 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") "
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696685 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") "
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.701495 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts" (OuterVolumeSpecName: "scripts") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.702139 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data" (OuterVolumeSpecName: "config-data") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.721447 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.727487 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799736 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799775 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799785 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799793 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:14 crc kubenswrapper[4979]: I0130 23:19:14.208355 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerDied","Data":"77ea6560b514db80edcd2b1a784559cc0b05150e8bf7ea65bca5ae3812975520"}
Jan 30 23:19:14 crc kubenswrapper[4979]: I0130 23:19:14.208418 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4bcmq"
Jan 30 23:19:14 crc kubenswrapper[4979]: I0130 23:19:14.208420 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ea6560b514db80edcd2b1a784559cc0b05150e8bf7ea65bca5ae3812975520"
Jan 30 23:19:20 crc kubenswrapper[4979]: I0130 23:19:20.991323 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-p7ttv"
Jan 30 23:19:23 crc kubenswrapper[4979]: I0130 23:19:23.069593 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"
Jan 30 23:19:23 crc kubenswrapper[4979]: E0130 23:19:23.070105 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.166141 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"]
Jan 30 23:19:25 crc kubenswrapper[4979]: E0130 23:19:25.167944 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="init"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.168243 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="init"
Jan 30 23:19:25 crc kubenswrapper[4979]: E0130 23:19:25.168265 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="octavia-db-sync"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.168272 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="octavia-db-sync"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.168491 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="octavia-db-sync"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.169960 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.174902 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"]
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.339171 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.339284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.339311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441410 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441962 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.442057 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.460755 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.501948 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.002852 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-pbxbw"
Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.045080 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"]
Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.332943 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8" exitCode=0
Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.333273 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"}
Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.333307 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerStarted","Data":"96ab71d86d8b333dc2defe343e0a32695ff14009ef2d6da9df54b0ddd55b9773"}
Jan 30 23:19:27 crc kubenswrapper[4979]: I0130 23:19:27.345380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerStarted","Data":"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"}
Jan 30 23:19:27 crc kubenswrapper[4979]: I0130 23:19:27.673835 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-89w6g"
Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.047149 4979 scope.go:117] "RemoveContainer" containerID="146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4"
Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.355476 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3" exitCode=0
Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.355542 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"}
Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.414447 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-m8s2f"
Jan 30 23:19:29 crc kubenswrapper[4979]: I0130 23:19:29.375498 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerStarted","Data":"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"}
Jan 30 23:19:29 crc kubenswrapper[4979]: I0130 23:19:29.405709 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2kp2z" podStartSLOduration=1.977104807 podStartE2EDuration="4.405684136s" podCreationTimestamp="2026-01-30 23:19:25 +0000 UTC" firstStartedPulling="2026-01-30 23:19:26.33520018 +0000 UTC m=+5962.296447213" lastFinishedPulling="2026-01-30 23:19:28.763779509 +0000 UTC m=+5964.725026542" observedRunningTime="2026-01-30 23:19:29.392749465 +0000 UTC m=+5965.353996498" watchObservedRunningTime="2026-01-30 23:19:29.405684136 +0000 UTC m=+5965.366931169"
Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.076411 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"
Jan 30 23:19:35 crc kubenswrapper[4979]: E0130 23:19:35.077484 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.502944 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.503393 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.573142 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:36 crc kubenswrapper[4979]: I0130 23:19:36.493426 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:36 crc kubenswrapper[4979]: I0130 23:19:36.541703 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"]
Jan 30 23:19:38 crc kubenswrapper[4979]: I0130 23:19:38.460430 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2kp2z" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" containerID="cri-o://790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" gracePeriod=2
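
The DELETE arriving from the API is followed by kuberuntime_container killing registry-server with gracePeriod=2 (the octavia-image-upload pod later gets gracePeriod=30). The grace period travels with the deletion, either from DeleteOptions or from the pod spec's terminationGracePeriodSeconds. A sketch of issuing such a delete through client-go, with the clientset construction assumed to happen elsewhere:

    package lognotes

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteWithGrace issues the kind of API delete seen above as
    // "SyncLoop DELETE" followed by a graceful container kill.
    func deleteWithGrace(ctx context.Context, cs kubernetes.Interface, ns, name string, seconds int64) error {
    	return cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
    		GracePeriodSeconds: &seconds, // e.g. 2 above; 30 for octavia-image-upload later
    	})
    }
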
Jan 30 23:19:38 crc kubenswrapper[4979]: I0130 23:19:38.985498 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.023658 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") "
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.023793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") "
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.023880 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") "
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.029265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg" (OuterVolumeSpecName: "kube-api-access-2fzsg") pod "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" (UID: "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc"). InnerVolumeSpecName "kube-api-access-2fzsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.033141 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities" (OuterVolumeSpecName: "utilities") pod "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" (UID: "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.070901 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" (UID: "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.126591 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.126635 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.126652 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472101 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" exitCode=0
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472143 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"}
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"96ab71d86d8b333dc2defe343e0a32695ff14009ef2d6da9df54b0ddd55b9773"}
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472189 4979 scope.go:117] "RemoveContainer" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472184 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.496186 4979 scope.go:117] "RemoveContainer" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.506060 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"]
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.517207 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"]
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.520152 4979 scope.go:117] "RemoveContainer" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.556853 4979 scope.go:117] "RemoveContainer" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"
Jan 30 23:19:39 crc kubenswrapper[4979]: E0130 23:19:39.557320 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e\": container with ID starting with 790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e not found: ID does not exist" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557353 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"} err="failed to get container status \"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e\": rpc error: code = NotFound desc = could not find container \"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e\": container with ID starting with 790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e not found: ID does not exist"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557374 4979 scope.go:117] "RemoveContainer" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"
Jan 30 23:19:39 crc kubenswrapper[4979]: E0130 23:19:39.557668 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3\": container with ID starting with 4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3 not found: ID does not exist" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557733 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"} err="failed to get container status \"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3\": rpc error: code = NotFound desc = could not find container \"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3\": container with ID starting with 4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3 not found: ID does not exist"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557778 4979 scope.go:117] "RemoveContainer" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"
Jan 30 23:19:39 crc kubenswrapper[4979]: E0130 23:19:39.558238 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8\": container with ID starting with 9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8 not found: ID does not exist" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"
Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.558288 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"} err="failed to get container status \"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8\": rpc error: code = NotFound desc = could not find container \"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8\": container with ID starting with 9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8 not found: ID does not exist"
Jan 30 23:19:41 crc kubenswrapper[4979]: I0130 23:19:41.081644 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" path="/var/lib/kubelet/pods/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc/volumes"
Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.244341 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"]
Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.245270 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" containerID="cri-o://319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8" gracePeriod=30
Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.584962 4979 generic.go:334] "Generic (PLEG): container finished" podID="56781d53-1264-465c-bee8-378a284703f7" containerID="319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8" exitCode=0
Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.585244 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerDied","Data":"319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8"}
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.932260 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"56781d53-1264-465c-bee8-378a284703f7\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.932937 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"56781d53-1264-465c-bee8-378a284703f7\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.961731 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "56781d53-1264-465c-bee8-378a284703f7" (UID: "56781d53-1264-465c-bee8-378a284703f7"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.963610 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "56781d53-1264-465c-bee8-378a284703f7" (UID: "56781d53-1264-465c-bee8-378a284703f7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.035364 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.035412 4979 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.601753 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerDied","Data":"c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517"} Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.601827 4979 scope.go:117] "RemoveContainer" containerID="319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.601906 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.643280 4979 scope.go:117] "RemoveContainer" containerID="a4eb0f4089220dbf1416ca0ec0eb59b035c4c9519b93e961db07573db46163c3" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.651097 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.662620 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:19:47 crc kubenswrapper[4979]: I0130 23:19:47.083195 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56781d53-1264-465c-bee8-378a284703f7" path="/var/lib/kubelet/pods/56781d53-1264-465c-bee8-378a284703f7/volumes" Jan 30 23:19:50 crc kubenswrapper[4979]: I0130 23:19:50.070545 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:19:50 crc kubenswrapper[4979]: E0130 23:19:50.071664 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:20:03 crc kubenswrapper[4979]: I0130 23:20:03.077429 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:20:03 crc kubenswrapper[4979]: I0130 23:20:03.815794 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83"} Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.028630 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029495 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029508 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029524 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029530 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029539 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-content" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029545 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-content" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029562 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="init" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029568 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="init" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029586 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-utilities" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029591 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-utilities" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029764 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029775 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.031095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.040584 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.093243 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.093598 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.093725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.195669 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.195739 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.195758 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.196239 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.197013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.228841 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.349841 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.825147 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.883583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerStarted","Data":"7c9bee82bf003525e7c54bb6adac62d1b43208facb084a3db5f4b3d939ef079f"} Jan 30 23:20:11 crc kubenswrapper[4979]: I0130 23:20:11.893709 4979 generic.go:334] "Generic (PLEG): container finished" podID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerID="85e4313a2cc68a06cc0e1950b147f85c65c4690869acc61ef113f874d32f80b3" exitCode=0 Jan 30 23:20:11 crc kubenswrapper[4979]: I0130 23:20:11.894018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"85e4313a2cc68a06cc0e1950b147f85c65c4690869acc61ef113f874d32f80b3"} Jan 30 23:20:13 crc kubenswrapper[4979]: I0130 23:20:13.914400 4979 generic.go:334] "Generic (PLEG): container finished" podID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerID="f5bc3f510c6052f923f11253c496940f01168e62bba5a86742401d3a8f876b89" exitCode=0 Jan 30 23:20:13 crc kubenswrapper[4979]: I0130 23:20:13.914481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"f5bc3f510c6052f923f11253c496940f01168e62bba5a86742401d3a8f876b89"} Jan 30 23:20:14 crc kubenswrapper[4979]: I0130 23:20:14.926295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" 
event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerStarted","Data":"96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5"} Jan 30 23:20:14 crc kubenswrapper[4979]: I0130 23:20:14.947185 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xxqqz" podStartSLOduration=2.447664273 podStartE2EDuration="4.947165293s" podCreationTimestamp="2026-01-30 23:20:10 +0000 UTC" firstStartedPulling="2026-01-30 23:20:11.896078461 +0000 UTC m=+6007.857325494" lastFinishedPulling="2026-01-30 23:20:14.395579481 +0000 UTC m=+6010.356826514" observedRunningTime="2026-01-30 23:20:14.941321655 +0000 UTC m=+6010.902568688" watchObservedRunningTime="2026-01-30 23:20:14.947165293 +0000 UTC m=+6010.908412326" Jan 30 23:20:20 crc kubenswrapper[4979]: I0130 23:20:20.350822 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:20 crc kubenswrapper[4979]: I0130 23:20:20.352177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:21 crc kubenswrapper[4979]: I0130 23:20:21.424243 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xxqqz" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server" probeResult="failure" output=< Jan 30 23:20:21 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 23:20:21 crc kubenswrapper[4979]: > Jan 30 23:20:23 crc kubenswrapper[4979]: E0130 23:20:23.306230 4979 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.143:40872->38.102.83.143:38353: write tcp 38.102.83.143:40872->38.102.83.143:38353: write: broken pipe Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.683694 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.685928 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690433 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690516 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690863 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-zdcgh" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690999 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.709314 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.753906 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.754196 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log" containerID="cri-o://82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.754331 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd" containerID="cri-o://55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767473 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767535 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767564 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767600 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.796812 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c6577687c-vbmgt"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.798723 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.860956 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.869795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871632 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871691 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871714 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871814 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871850 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871874 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871962 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.872905 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.872997 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.873774 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.878632 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.878963 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log" containerID="cri-o://b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.879392 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd" containerID="cri-o://4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.880791 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.902183 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxbz\" (UniqueName: 
\"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.973663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974181 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.975144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.976364 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.976501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.979821 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc 
kubenswrapper[4979]: I0130 23:20:27.993842 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.020181 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.119170 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.131390 4979 generic.go:334] "Generic (PLEG): container finished" podID="2eceabd7-12d5-42b8-9add-f89801459249" containerID="82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6" exitCode=143 Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.131477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerDied","Data":"82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6"} Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.137331 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerID="b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545" exitCode=143 Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.137378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerDied","Data":"b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545"} Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.549858 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.586682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.614619 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.617335 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.629869 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:20:28 crc kubenswrapper[4979]: W0130 23:20:28.685566 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97022094_b924_4af4_9725_f91da4c8c957.slice/crio-607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae WatchSource:0}: Error finding container 607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae: Status 404 returned error can't find the container with id 607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.690635 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696646 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696717 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696781 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798765 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798813 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798866 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798897 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.799546 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.799857 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.800491 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.805422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.814256 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.950853 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:29 crc kubenswrapper[4979]: I0130 23:20:29.168313 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerStarted","Data":"17d83bd27fd3e90bc65a85060180708bdecb525bf29dd691bf268c805ec8d75c"} Jan 30 23:20:29 crc kubenswrapper[4979]: I0130 23:20:29.189652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerStarted","Data":"607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae"} Jan 30 23:20:29 crc kubenswrapper[4979]: I0130 23:20:29.414776 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:20:29 crc kubenswrapper[4979]: W0130 23:20:29.433405 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9525bd9a_233e_4207_ac68_26491c2debf7.slice/crio-a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4 WatchSource:0}: Error finding container a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4: Status 404 returned error can't find the container with id a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4 Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.199370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerStarted","Data":"a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4"} Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.411081 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.459418 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.650900 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.243342 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerID="4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232" exitCode=0 Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.243426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerDied","Data":"4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232"} Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.246567 4979 generic.go:334] "Generic (PLEG): container finished" podID="2eceabd7-12d5-42b8-9add-f89801459249" containerID="55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065" exitCode=0 Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.247324 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerDied","Data":"55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065"} Jan 30 23:20:32 crc kubenswrapper[4979]: I0130 23:20:32.253746 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-xxqqz" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server" containerID="cri-o://96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5" gracePeriod=2 Jan 30 23:20:32 crc kubenswrapper[4979]: E0130 23:20:32.525326 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ff2fbcd_7776_466a_b4fc_9ffefbb5fc83.slice/crio-conmon-96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:20:33 crc kubenswrapper[4979]: I0130 23:20:33.266875 4979 generic.go:334] "Generic (PLEG): container finished" podID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerID="96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5" exitCode=0 Jan 30 23:20:33 crc kubenswrapper[4979]: I0130 23:20:33.267048 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5"} Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.522402 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.712136 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713393 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713480 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713635 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713685 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 
23:20:36.713860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.714570 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs" (OuterVolumeSpecName: "logs") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.714966 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.716117 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.716135 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.720357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5" (OuterVolumeSpecName: "kube-api-access-zzpd5") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "kube-api-access-zzpd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.720601 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts" (OuterVolumeSpecName: "scripts") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.755419 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph" (OuterVolumeSpecName: "ceph") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.786777 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817871 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817905 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817915 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817923 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.848334 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.885190 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data" (OuterVolumeSpecName: "config-data") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.919832 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.978428 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.020898 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.021030 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.021218 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.023158 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities" (OuterVolumeSpecName: "utilities") pod "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" (UID: "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.029170 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85" (OuterVolumeSpecName: "kube-api-access-6vk85") pod "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" (UID: "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83"). InnerVolumeSpecName "kube-api-access-6vk85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.122842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.122948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123014 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123304 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123339 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123428 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123862 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123880 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.124529 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs" (OuterVolumeSpecName: "logs") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.124614 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.127652 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj" (OuterVolumeSpecName: "kube-api-access-hg4wj") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "kube-api-access-hg4wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.134578 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph" (OuterVolumeSpecName: "ceph") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.142449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts" (OuterVolumeSpecName: "scripts") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.173119 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" (UID: "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.219498 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225878 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225908 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225919 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225930 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225938 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225946 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225954 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.244120 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data" (OuterVolumeSpecName: "config-data") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.308015 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerStarted","Data":"aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.308279 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerStarted","Data":"1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310086 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerDied","Data":"028416f98f1589f42948c16d708a30eb83a748a9cb41317ffc40f3e850e93529"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310135 4979 scope.go:117] "RemoveContainer" containerID="4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310095 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.312799 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"7c9bee82bf003525e7c54bb6adac62d1b43208facb084a3db5f4b3d939ef079f"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.312871 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.318589 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerStarted","Data":"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.318663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerStarted","Data":"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.320698 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.320691 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerDied","Data":"2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323056 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerStarted","Data":"24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323093 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerStarted","Data":"c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323220 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fcc7dc57-tn5qb" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log" containerID="cri-o://c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0" gracePeriod=30 Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323471 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fcc7dc57-tn5qb" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon" containerID="cri-o://24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0" gracePeriod=30 Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.328164 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.341343 4979 scope.go:117] "RemoveContainer" containerID="b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.350809 4979 
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.350809 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c6577687c-vbmgt" podStartSLOduration=2.502330244 podStartE2EDuration="10.350788888s" podCreationTimestamp="2026-01-30 23:20:27 +0000 UTC" firstStartedPulling="2026-01-30 23:20:28.687746024 +0000 UTC m=+6024.648993057" lastFinishedPulling="2026-01-30 23:20:36.536204678 +0000 UTC m=+6032.497451701" observedRunningTime="2026-01-30 23:20:37.342183795 +0000 UTC m=+6033.303430838" watchObservedRunningTime="2026-01-30 23:20:37.350788888 +0000 UTC m=+6033.312035911"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.362629 4979 scope.go:117] "RemoveContainer" containerID="96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.379665 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66977458c7-msp58" podStartSLOduration=2.280605363 podStartE2EDuration="9.37963943s" podCreationTimestamp="2026-01-30 23:20:28 +0000 UTC" firstStartedPulling="2026-01-30 23:20:29.437384726 +0000 UTC m=+6025.398631759" lastFinishedPulling="2026-01-30 23:20:36.536418773 +0000 UTC m=+6032.497665826" observedRunningTime="2026-01-30 23:20:37.363227935 +0000 UTC m=+6033.324474968" watchObservedRunningTime="2026-01-30 23:20:37.37963943 +0000 UTC m=+6033.340886463"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.390047 4979 scope.go:117] "RemoveContainer" containerID="f5bc3f510c6052f923f11253c496940f01168e62bba5a86742401d3a8f876b89"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.392911 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.418258 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.420553 4979 scope.go:117] "RemoveContainer" containerID="85e4313a2cc68a06cc0e1950b147f85c65c4690869acc61ef113f874d32f80b3"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.436346 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.455898 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.484217 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.500695 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.501958 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7fcc7dc57-tn5qb" podStartSLOduration=2.494362629 podStartE2EDuration="10.50193551s" podCreationTimestamp="2026-01-30 23:20:27 +0000 UTC" firstStartedPulling="2026-01-30 23:20:28.58673906 +0000 UTC m=+6024.547986093" lastFinishedPulling="2026-01-30 23:20:36.594311941 +0000 UTC m=+6032.555558974" observedRunningTime="2026-01-30 23:20:37.425412838 +0000 UTC m=+6033.386659871" watchObservedRunningTime="2026-01-30 23:20:37.50193551 +0000 UTC m=+6033.463182543"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.511597 4979 scope.go:117] "RemoveContainer" containerID="55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065"
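The tracker entries above decompose as follows: podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp (23:20:37.350788888 - 23:20:27 = 10.350788888s for horizon-c6577687c-vbmgt), and podStartSLOduration is the E2E duration minus the time spent pulling images, taken on the kubelet's monotonic clock (the m=+... offsets): 6032.497451701 - 6024.648993057 = 7.848458644s of pulling, so 10.350788888 - 7.848458644 = 2.502330244, matching the logged value. A few lines of Go reproduce the arithmetic:

// slo_math.go — reproduces the podStartSLOduration arithmetic from the
// horizon-c6577687c-vbmgt tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 10350788888 * time.Nanosecond // watchObservedRunningTime - podCreationTimestamp
	// Image-pull window on the kubelet's monotonic clock (m=+... offsets):
	pull := time.Duration((6032.497451701 - 6024.648993057) * float64(time.Second))
	fmt.Println(e2e - pull) // ~2.502330244s, the logged podStartSLOduration
}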
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.527611 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528067 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528081 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528097 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528104 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528123 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-content"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528129 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-content"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528144 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528149 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528164 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-utilities"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528170 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-utilities"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528182 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528189 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528206 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528211 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528385 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528395 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528418 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528428 4979 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528440 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.529600 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.535564 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.535734 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.535891 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92r4q" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.536210 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.538246 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.539938 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.544797 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.553943 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.574258 4979 scope.go:117] "RemoveContainer" containerID="82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633610 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-logs\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-config-data\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prb2q\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-kube-api-access-prb2q\") pod 
\"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633791 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633890 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633917 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633951 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633967 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634255 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czffz\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-kube-api-access-czffz\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634323 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-ceph\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634341 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-scripts\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634392 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736989 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737034 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737165 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737225 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czffz\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-kube-api-access-czffz\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737245 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-ceph\") pod 
\"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737262 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-scripts\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737278 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-logs\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737304 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737315 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-config-data\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737513 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prb2q\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-kube-api-access-prb2q\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 
30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.738167 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-logs\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.742185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.743635 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.743929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.745302 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-ceph\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.747828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-scripts\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.750013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.753793 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.755861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-config-data\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.757464 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czffz\" (UniqueName: 
\"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-kube-api-access-czffz\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.758319 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prb2q\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-kube-api-access-prb2q\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.865125 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.879337 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.021906 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.121074 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.121123 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.516174 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.603149 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.953336 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.953394 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.090087 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" path="/var/lib/kubelet/pods/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83/volumes" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.091133 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eceabd7-12d5-42b8-9add-f89801459249" path="/var/lib/kubelet/pods/2eceabd7-12d5-42b8-9add-f89801459249/volumes" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.091772 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" path="/var/lib/kubelet/pods/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a/volumes" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.351643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67c81730-0360-4ee7-a657-774bab3e5ce1","Type":"ContainerStarted","Data":"4bf4a91a7c05a8cba94f918f727a124c8ff9328e1632948df22dbd481e5db031"} Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.352962 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a","Type":"ContainerStarted","Data":"817c3a28a213997b0532f8b96aa7caffd9f1684b511c7c605774368c55b9160f"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.364995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67c81730-0360-4ee7-a657-774bab3e5ce1","Type":"ContainerStarted","Data":"a109a6f4211a28840305607d7f68018d9fbd8da26ce2ec45158c294ff9280342"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.366665 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67c81730-0360-4ee7-a657-774bab3e5ce1","Type":"ContainerStarted","Data":"7ffab0db6c226419028deb4e521d3b1dc32f71dcc4771ad0a23a0221dcec5607"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.371972 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a","Type":"ContainerStarted","Data":"868aaaac2b29e60ec894a1a8d01e6f4d984a8b589ebcbfff89491e83274e523a"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.372030 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a","Type":"ContainerStarted","Data":"d0bcf15098c54793173bc634d1d439bc3075d0c0b45bdf3cfd75b3e26f7400a3"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.398163 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.398134648 podStartE2EDuration="3.398134648s" podCreationTimestamp="2026-01-30 23:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:20:40.389236587 +0000 UTC m=+6036.350483620" watchObservedRunningTime="2026-01-30 23:20:40.398134648 +0000 UTC m=+6036.359381691" Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.431433 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.431407749 podStartE2EDuration="3.431407749s" podCreationTimestamp="2026-01-30 23:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:20:40.425460538 +0000 UTC m=+6036.386707581" watchObservedRunningTime="2026-01-30 23:20:40.431407749 +0000 UTC m=+6036.392654802" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.060275 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.079424 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.085578 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.094748 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.866215 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.866263 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.881101 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.882800 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.920988 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.930978 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.937895 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.944131 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.123629 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462434 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462481 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462495 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462508 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.954257 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:20:49 crc kubenswrapper[4979]: I0130 23:20:49.080829 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" path="/var/lib/kubelet/pods/800775b4-f78f-4f2f-9d21-4dd42458db2b/volumes" Jan 30 23:20:49 crc kubenswrapper[4979]: I0130 23:20:49.081651 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" path="/var/lib/kubelet/pods/c5bf2d6f-952e-4cec-938b-e1d00042c3ad/volumes" Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.421327 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.423252 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:20:50 crc 
Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.431635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.476631 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 23:20:53 crc kubenswrapper[4979]: I0130 23:20:53.060099 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qpzjk"]
Jan 30 23:20:53 crc kubenswrapper[4979]: I0130 23:20:53.103343 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qpzjk"]
Jan 30 23:20:55 crc kubenswrapper[4979]: I0130 23:20:55.084438 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338244cb-adb6-4402-ba74-378f70078ebd" path="/var/lib/kubelet/pods/338244cb-adb6-4402-ba74-378f70078ebd/volumes"
Jan 30 23:20:59 crc kubenswrapper[4979]: I0130 23:20:59.754312 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:00 crc kubenswrapper[4979]: I0130 23:21:00.639695 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66977458c7-msp58"
Jan 30 23:21:01 crc kubenswrapper[4979]: I0130 23:21:01.392337 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.478602 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66977458c7-msp58"
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.551740 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"]
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.552279 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log" containerID="cri-o://1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819" gracePeriod=30
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.552432 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" containerID="cri-o://aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e" gracePeriod=30
Jan 30 23:21:06 crc kubenswrapper[4979]: I0130 23:21:06.649397 4979 generic.go:334] "Generic (PLEG): container finished" podID="97022094-b924-4af4-9725-f91da4c8c957" containerID="aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e" exitCode=0
Jan 30 23:21:06 crc kubenswrapper[4979]: I0130 23:21:06.649555 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerDied","Data":"aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e"}
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669722 4979 generic.go:334] "Generic (PLEG): container finished" podID="9d2ab497-2486-4439-ae5b-c2284f870680" containerID="24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0" exitCode=137
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669762 4979 generic.go:334] "Generic (PLEG): container finished" podID="9d2ab497-2486-4439-ae5b-c2284f870680" containerID="c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0" 
exitCode=137 Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669787 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerDied","Data":"24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0"} Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669820 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerDied","Data":"c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0"} Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.825948 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.880776 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881251 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881638 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881865 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs" (OuterVolumeSpecName: "logs") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.882446 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.890879 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.894284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz" (OuterVolumeSpecName: "kube-api-access-2bxbz") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "kube-api-access-2bxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.911719 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data" (OuterVolumeSpecName: "config-data") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.915604 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts" (OuterVolumeSpecName: "scripts") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984776 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984813 4979 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984824 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984832 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.121550 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.686197 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerDied","Data":"17d83bd27fd3e90bc65a85060180708bdecb525bf29dd691bf268c805ec8d75c"} Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.686265 4979 scope.go:117] "RemoveContainer" containerID="24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.686438 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.739419 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.757158 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.910881 4979 scope.go:117] "RemoveContainer" containerID="c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0" Jan 30 23:21:09 crc kubenswrapper[4979]: I0130 23:21:09.083617 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" path="/var/lib/kubelet/pods/9d2ab497-2486-4439-ae5b-c2284f870680/volumes" Jan 30 23:21:18 crc kubenswrapper[4979]: I0130 23:21:18.120834 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.032230 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.045327 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.055528 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.065603 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.081489 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" path="/var/lib/kubelet/pods/b871a72e-a648-4c40-b5eb-604c75307e21/volumes" Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.082464 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" path="/var/lib/kubelet/pods/b92c4a95-be2f-4c0d-a789-f7505dcdfd97/volumes" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.121131 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.122798 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.419654 4979 scope.go:117] "RemoveContainer" containerID="679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.456503 4979 scope.go:117] "RemoveContainer" containerID="7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.498747 4979 scope.go:117] "RemoveContainer" containerID="46d964a0839cd8efea2510cfac9bc323533200f0741e6142ba6a532c576e85b4" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.527515 4979 scope.go:117] 
"RemoveContainer" containerID="6488fa6f75b07a884cb1c9e243ae1419c47f4f31507eee906fc6e83084e37e42" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.592107 4979 scope.go:117] "RemoveContainer" containerID="088e2e7d854d7dc05cd4dbe8fe4c7ffcbdee731d873f6f602ab10d8c9fb6c170" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.649568 4979 scope.go:117] "RemoveContainer" containerID="6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6" Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.693741 4979 scope.go:117] "RemoveContainer" containerID="7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5" Jan 30 23:21:29 crc kubenswrapper[4979]: I0130 23:21:29.083000 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2w7hf"] Jan 30 23:21:29 crc kubenswrapper[4979]: I0130 23:21:29.088809 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2w7hf"] Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.164471 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"] Jan 30 23:21:30 crc kubenswrapper[4979]: E0130 23:21:30.165230 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165245 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon" Jan 30 23:21:30 crc kubenswrapper[4979]: E0130 23:21:30.165261 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165268 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165478 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165508 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.167181 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.185076 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"] Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.287937 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.288250 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.288275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.389935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.390012 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.390190 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.391263 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.391519 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.420785 4979 operation_generator.go:637] "MountVolume.SetUp 
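The sequence above is the standard volume lifecycle for a new pod: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, once per volume. For this catalog pod that is two emptyDirs plus the projected service-account token the API server injects as kube-api-access-k9v52. A sketch of the user-declared half of that spec, assuming plain emptyDir defaults (the projected token volume is synthesized automatically and is not declared by the pod author):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Names match the UniqueName suffixes in the reconciler entries above;
	// the kube-api-access-* projected volume is added by Kubernetes itself.
	volumes := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
	for _, v := range volumes {
		fmt.Println("emptyDir volume:", v.Name)
	}
}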
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.499767 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.038959 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.088491 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" path="/var/lib/kubelet/pods/fc87a0f7-9b2b-46ce-a000-c1c5195535d8/volumes"
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.950818 4979 generic.go:334] "Generic (PLEG): container finished" podID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d" exitCode=0
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.951416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"}
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.951513 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerStarted","Data":"3a83afe9c90f101f2c59a963efc80878bb5b9d1c7e7ba87227c5ecf0489009f1"}
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.954519 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 23:21:32 crc kubenswrapper[4979]: E0130 23:21:32.807356 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97022094_b924_4af4_9725_f91da4c8c957.slice/crio-conmon-1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 23:21:32 crc kubenswrapper[4979]: I0130 23:21:32.966382 4979 generic.go:334] "Generic (PLEG): container finished" podID="97022094-b924-4af4-9725-f91da4c8c957" containerID="1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819" exitCode=137
Jan 30 23:21:32 crc kubenswrapper[4979]: I0130 23:21:32.967002 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerDied","Data":"1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819"}
Jan 30 23:21:32 crc kubenswrapper[4979]: I0130 23:21:32.969423 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerStarted","Data":"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"}
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.540758 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt"
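Every record here has two layers: the journald prefix (Jan 30 ... crc kubenswrapper[4979]:) and the klog header (severity letter, MMDD date, wall-clock time, PID, file:line). A small sketch of a parser that splits the klog half apart, handy for flagging lines such as the exitCode=137 above (137 = 128 + SIGKILL, i.e. the horizon container was killed rather than exiting on its own); the regex is this editor's, not part of any kubelet tooling:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches what follows the journald prefix: severity (I/W/E),
// MMDD, wall-clock time, PID, source file:line, then the message.
var klogHeader = regexp.MustCompile(
	`^([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I0130 23:21:32.966382 4979 generic.go:334] "Generic (PLEG): container finished" exitCode=137`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("sev=%s date=%s time=%s pid=%s src=%s\n  msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}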
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.680386 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.681192 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.681715 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.682238 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.682343 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.682434 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs" (OuterVolumeSpecName: "logs") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.683695 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.688203 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n" (OuterVolumeSpecName: "kube-api-access-zdt2n") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "kube-api-access-zdt2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.688653 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.711897 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data" (OuterVolumeSpecName: "config-data") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.741215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts" (OuterVolumeSpecName: "scripts") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785534 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785588 4979 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785614 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785634 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.985508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerDied","Data":"607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae"}
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.985567 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.985580 4979 scope.go:117] "RemoveContainer" containerID="aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e"
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.988548 4979 generic.go:334] "Generic (PLEG): container finished" podID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997" exitCode=0
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.988592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"}
Jan 30 23:21:34 crc kubenswrapper[4979]: I0130 23:21:34.052241 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"]
Jan 30 23:21:34 crc kubenswrapper[4979]: I0130 23:21:34.060629 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"]
Jan 30 23:21:34 crc kubenswrapper[4979]: I0130 23:21:34.212297 4979 scope.go:117] "RemoveContainer" containerID="1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819"
Jan 30 23:21:35 crc kubenswrapper[4979]: I0130 23:21:35.006523 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerStarted","Data":"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"}
Jan 30 23:21:35 crc kubenswrapper[4979]: I0130 23:21:35.052747 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ncmfr" podStartSLOduration=2.524742951 podStartE2EDuration="5.052719873s" podCreationTimestamp="2026-01-30 23:21:30 +0000 UTC" firstStartedPulling="2026-01-30 23:21:31.954262239 +0000 UTC m=+6087.915509272" lastFinishedPulling="2026-01-30 23:21:34.482239131 +0000 UTC m=+6090.443486194" observedRunningTime="2026-01-30 23:21:35.034754636 +0000 UTC m=+6090.996001709" watchObservedRunningTime="2026-01-30 23:21:35.052719873 +0000 UTC m=+6091.013966906"
Jan 30 23:21:35 crc kubenswrapper[4979]: I0130 23:21:35.086527 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97022094-b924-4af4-9725-f91da4c8c957" path="/var/lib/kubelet/pods/97022094-b924-4af4-9725-f91da4c8c957/volumes"
Jan 30 23:21:40 crc kubenswrapper[4979]: I0130 23:21:40.499886 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:40 crc kubenswrapper[4979]: I0130 23:21:40.500428 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:40 crc kubenswrapper[4979]: I0130 23:21:40.546073 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:41 crc kubenswrapper[4979]: I0130 23:21:41.124874 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:41 crc kubenswrapper[4979]: I0130 23:21:41.186312 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.084369 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ncmfr" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server" containerID="cri-o://09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" gracePeriod=2
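The pod_startup_latency_tracker entry above carries its own arithmetic: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling), the idea being that pull time does not count against the startup SLO. Redoing the sums from the logged timestamps reproduces the logged values to within ~30ns of rounding; a quick check, assuming that formula, with the timestamps copied from the entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Observed pod startup duration" entry.
	created := time.Date(2026, 1, 30, 23, 21, 30, 0, time.UTC)
	firstPull := time.Date(2026, 1, 30, 23, 21, 31, 954262239, time.UTC)
	lastPull := time.Date(2026, 1, 30, 23, 21, 34, 482239131, time.UTC)
	watchObserved := time.Date(2026, 1, 30, 23, 21, 35, 52719873, time.UTC)

	e2e := watchObserved.Sub(created)    // 5.052719873s, as logged
	slo := e2e - lastPull.Sub(firstPull) // ~2.5247s, matching podStartSLOduration
	fmt.Println("E2E:", e2e, "SLO:", slo)
}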
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ncmfr" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server" containerID="cri-o://09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" gracePeriod=2 Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.545940 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.548716 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.548805 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.548968 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.550541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities" (OuterVolumeSpecName: "utilities") pod "db7b49f6-c61a-4db0-a0cd-1f91923bb781" (UID: "db7b49f6-c61a-4db0-a0cd-1f91923bb781"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.556304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52" (OuterVolumeSpecName: "kube-api-access-k9v52") pod "db7b49f6-c61a-4db0-a0cd-1f91923bb781" (UID: "db7b49f6-c61a-4db0-a0cd-1f91923bb781"). InnerVolumeSpecName "kube-api-access-k9v52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.597018 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db7b49f6-c61a-4db0-a0cd-1f91923bb781" (UID: "db7b49f6-c61a-4db0-a0cd-1f91923bb781"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.650772 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.650803 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.650816 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096115 4979 generic.go:334] "Generic (PLEG): container finished" podID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" exitCode=0 Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096178 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"} Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096207 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"3a83afe9c90f101f2c59a963efc80878bb5b9d1c7e7ba87227c5ecf0489009f1"} Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096230 4979 scope.go:117] "RemoveContainer" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096253 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.125417 4979 scope.go:117] "RemoveContainer" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.154566 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"] Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.167988 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"] Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.179730 4979 scope.go:117] "RemoveContainer" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.211245 4979 scope.go:117] "RemoveContainer" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.211758 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3\": container with ID starting with 09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3 not found: ID does not exist" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.211879 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"} err="failed to get container status \"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3\": rpc error: code = NotFound desc = could not find container \"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3\": container with ID starting with 09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3 not found: ID does not exist" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.211962 4979 scope.go:117] "RemoveContainer" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.212492 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997\": container with ID starting with 5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997 not found: ID does not exist" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.212528 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"} err="failed to get container status \"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997\": rpc error: code = NotFound desc = could not find container \"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997\": container with ID starting with 5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997 not found: ID does not exist" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.212547 4979 scope.go:117] "RemoveContainer" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.212851 4979 log.go:32] "ContainerStatus from runtime service 
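The E-level entries above are noise from a benign race: kubelet asked CRI-O to remove containers it had already removed moments earlier, so the follow-up ContainerStatus lookup returns gRPC NotFound and pod_container_deletor just records it. Callers that want removal to be idempotent conventionally swallow exactly that code; a sketch of the pattern (not kubelet's actual code path):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer round-trip that can
// race with a removal performed elsewhere.
func removeContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := removeContainer("09fa0e0d1bee")
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		// Already gone: treat deletion as having succeeded.
		fmt.Println("container already removed, nothing to do")
		return
	}
	if err != nil {
		fmt.Println("removal failed:", err)
	}
}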
failed" err="rpc error: code = NotFound desc = could not find container \"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d\": container with ID starting with b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d not found: ID does not exist" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.212905 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"} err="failed to get container status \"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d\": rpc error: code = NotFound desc = could not find container \"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d\": container with ID starting with b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d not found: ID does not exist" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.871273 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9b688f5c-2xlsg"] Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.871826 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-content" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.871857 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-content" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872092 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-utilities" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872103 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-utilities" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872123 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872132 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872153 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872197 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872206 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872461 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872484 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872499 4979 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.873891 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.887575 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b688f5c-2xlsg"] Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.977894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d199303b-d615-40f9-a420-bfde359d8392-horizon-secret-key\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.977941 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d199303b-d615-40f9-a420-bfde359d8392-logs\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.977988 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-scripts\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.978020 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22wbq\" (UniqueName: \"kubernetes.io/projected/d199303b-d615-40f9-a420-bfde359d8392-kube-api-access-22wbq\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.978159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-config-data\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.079502 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" path="/var/lib/kubelet/pods/db7b49f6-c61a-4db0-a0cd-1f91923bb781/volumes" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.079915 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-config-data\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080145 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d199303b-d615-40f9-a420-bfde359d8392-horizon-secret-key\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080232 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/d199303b-d615-40f9-a420-bfde359d8392-logs\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080313 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-scripts\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22wbq\" (UniqueName: \"kubernetes.io/projected/d199303b-d615-40f9-a420-bfde359d8392-kube-api-access-22wbq\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.081149 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d199303b-d615-40f9-a420-bfde359d8392-logs\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.081168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-config-data\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.081560 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-scripts\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.090682 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d199303b-d615-40f9-a420-bfde359d8392-horizon-secret-key\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.100212 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22wbq\" (UniqueName: \"kubernetes.io/projected/d199303b-d615-40f9-a420-bfde359d8392-kube-api-access-22wbq\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.192346 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.643539 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b688f5c-2xlsg"] Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.115597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b688f5c-2xlsg" event={"ID":"d199303b-d615-40f9-a420-bfde359d8392","Type":"ContainerStarted","Data":"1f6e61f45ee2c20b9d250631abd7d5b77981ced17dc48e9e83853de337cfd740"} Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.115963 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b688f5c-2xlsg" event={"ID":"d199303b-d615-40f9-a420-bfde359d8392","Type":"ContainerStarted","Data":"f7b9808af55c261ce3a86f8e914ffe1eda713e5129a0c40feee8fa197969f4e6"} Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.115979 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b688f5c-2xlsg" event={"ID":"d199303b-d615-40f9-a420-bfde359d8392","Type":"ContainerStarted","Data":"e76dbefd9375959ed08c6ef182fa3b9ba46bb6e9ca794b8b13ca8ec8394d76a8"} Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.149483 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9b688f5c-2xlsg" podStartSLOduration=2.149459837 podStartE2EDuration="2.149459837s" podCreationTimestamp="2026-01-30 23:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:21:46.147476223 +0000 UTC m=+6102.108723256" watchObservedRunningTime="2026-01-30 23:21:46.149459837 +0000 UTC m=+6102.110706870" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.243422 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-vjhff"] Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.245205 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.257138 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-vjhff"] Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.407595 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.407850 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.447433 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"] Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.449087 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.455857 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"] Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.482785 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.509935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.510075 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.510917 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.529691 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.580481 4979 util.go:30] "No sandbox for pod can be found. 
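The reflector line above (Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret") is kubelet's per-object watch: rather than caching every Secret on the cluster, it starts a reflector scoped by field selector to the one Secret the new pod mounts. The equivalent client-go call, sketched with the namespace and name from the log (the in-cluster config setup is the usual boilerplate and assumes this runs inside the cluster):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only the single Secret the pod mounts, as kubelet does.
	w, err := client.CoreV1().Secrets("openstack").Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=heat-db-secret"})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("secret event:", ev.Type)
	}
}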
Need to start a new one" pod="openstack/heat-db-create-vjhff" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.611811 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.612482 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.714001 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.714624 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.715795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.734652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.794095 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:47 crc kubenswrapper[4979]: I0130 23:21:47.064576 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-vjhff"] Jan 30 23:21:47 crc kubenswrapper[4979]: I0130 23:21:47.189055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-vjhff" event={"ID":"e764deeb-609a-4c01-8e75-729988b54849","Type":"ContainerStarted","Data":"60d1ddd9bb063feb469224e320c87fd0ea28242283125ecdcd769faaab57aa0b"} Jan 30 23:21:47 crc kubenswrapper[4979]: W0130 23:21:47.276254 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4b69e9_3082_4eac_a4c9_2fd308ed75bd.slice/crio-96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb WatchSource:0}: Error finding container 96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb: Status 404 returned error can't find the container with id 96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb Jan 30 23:21:47 crc kubenswrapper[4979]: I0130 23:21:47.280819 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"] Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.213390 4979 generic.go:334] "Generic (PLEG): container finished" podID="e764deeb-609a-4c01-8e75-729988b54849" containerID="4bf379d2ade37e9d1e0a22eab217d802e3a8854982275953c74bf158307b26eb" exitCode=0 Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.213457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-vjhff" event={"ID":"e764deeb-609a-4c01-8e75-729988b54849","Type":"ContainerDied","Data":"4bf379d2ade37e9d1e0a22eab217d802e3a8854982275953c74bf158307b26eb"} Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.222525 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerID="58163cfecf1e6d2fed441241595c3b510d0c4b0a9adfab7fece442a3238e97f0" exitCode=0 Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.222566 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-90f9-account-create-update-f758c" event={"ID":"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd","Type":"ContainerDied","Data":"58163cfecf1e6d2fed441241595c3b510d0c4b0a9adfab7fece442a3238e97f0"} Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.222596 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-90f9-account-create-update-f758c" event={"ID":"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd","Type":"ContainerStarted","Data":"96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb"} Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.747041 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.752315 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-vjhff" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890220 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890292 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"e764deeb-609a-4c01-8e75-729988b54849\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890344 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"e764deeb-609a-4c01-8e75-729988b54849\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890862 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" (UID: "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890939 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e764deeb-609a-4c01-8e75-729988b54849" (UID: "e764deeb-609a-4c01-8e75-729988b54849"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.895656 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27" (OuterVolumeSpecName: "kube-api-access-bjw27") pod "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" (UID: "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd"). InnerVolumeSpecName "kube-api-access-bjw27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.896472 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh" (OuterVolumeSpecName: "kube-api-access-wsjkh") pod "e764deeb-609a-4c01-8e75-729988b54849" (UID: "e764deeb-609a-4c01-8e75-729988b54849"). InnerVolumeSpecName "kube-api-access-wsjkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993238 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993292 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993319 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993336 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.240870 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-vjhff" event={"ID":"e764deeb-609a-4c01-8e75-729988b54849","Type":"ContainerDied","Data":"60d1ddd9bb063feb469224e320c87fd0ea28242283125ecdcd769faaab57aa0b"} Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.240896 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-vjhff" Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.240912 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d1ddd9bb063feb469224e320c87fd0ea28242283125ecdcd769faaab57aa0b" Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.242327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-90f9-account-create-update-f758c" event={"ID":"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd","Type":"ContainerDied","Data":"96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb"} Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.242423 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb" Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.242397 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.516308 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:21:51 crc kubenswrapper[4979]: E0130 23:21:51.517223 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerName="mariadb-account-create-update" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517246 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerName="mariadb-account-create-update" Jan 30 23:21:51 crc kubenswrapper[4979]: E0130 23:21:51.517291 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e764deeb-609a-4c01-8e75-729988b54849" containerName="mariadb-database-create" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517304 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e764deeb-609a-4c01-8e75-729988b54849" containerName="mariadb-database-create" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517666 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerName="mariadb-account-create-update" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517702 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e764deeb-609a-4c01-8e75-729988b54849" containerName="mariadb-database-create" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.518717 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.522162 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4r7vf" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.522840 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.532673 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.627249 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.627296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.627323 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.730305 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.730781 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.730918 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.737009 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.744091 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.747160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.915831 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-lhhst" Jan 30 23:21:52 crc kubenswrapper[4979]: I0130 23:21:52.388161 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:21:52 crc kubenswrapper[4979]: W0130 23:21:52.391375 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e6a3c61_50ef_48b5_bcc0_ab3374693979.slice/crio-856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a WatchSource:0}: Error finding container 856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a: Status 404 returned error can't find the container with id 856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a Jan 30 23:21:53 crc kubenswrapper[4979]: I0130 23:21:53.280362 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerStarted","Data":"856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a"} Jan 30 23:21:55 crc kubenswrapper[4979]: I0130 23:21:55.192743 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:55 crc kubenswrapper[4979]: I0130 23:21:55.193215 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:21:55 crc kubenswrapper[4979]: I0130 23:21:55.195112 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9b688f5c-2xlsg" podUID="d199303b-d615-40f9-a420-bfde359d8392" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.120:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8080: connect: connection refused" Jan 30 23:21:59 crc kubenswrapper[4979]: I0130 23:21:59.370861 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerStarted","Data":"c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e"} Jan 30 23:21:59 crc kubenswrapper[4979]: I0130 23:21:59.414167 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-lhhst" podStartSLOduration=1.970073687 podStartE2EDuration="8.414139736s" podCreationTimestamp="2026-01-30 23:21:51 +0000 UTC" firstStartedPulling="2026-01-30 23:21:52.3946203 +0000 UTC m=+6108.355867333" lastFinishedPulling="2026-01-30 23:21:58.838686339 +0000 UTC m=+6114.799933382" observedRunningTime="2026-01-30 23:21:59.398005709 +0000 UTC m=+6115.359252782" watchObservedRunningTime="2026-01-30 23:21:59.414139736 +0000 UTC m=+6115.375386799" Jan 30 23:22:01 crc kubenswrapper[4979]: I0130 23:22:01.392369 4979 generic.go:334] "Generic (PLEG): container finished" podID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerID="c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e" exitCode=0 Jan 30 23:22:01 crc kubenswrapper[4979]: I0130 23:22:01.392460 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerDied","Data":"c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e"} Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.846126 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-lhhst" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.884988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.885644 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.885920 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.896060 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r" (OuterVolumeSpecName: "kube-api-access-nbk5r") pod "4e6a3c61-50ef-48b5-bcc0-ab3374693979" (UID: "4e6a3c61-50ef-48b5-bcc0-ab3374693979"). InnerVolumeSpecName "kube-api-access-nbk5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.917428 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e6a3c61-50ef-48b5-bcc0-ab3374693979" (UID: "4e6a3c61-50ef-48b5-bcc0-ab3374693979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.964748 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data" (OuterVolumeSpecName: "config-data") pod "4e6a3c61-50ef-48b5-bcc0-ab3374693979" (UID: "4e6a3c61-50ef-48b5-bcc0-ab3374693979"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.988858 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.988884 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.988895 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:03 crc kubenswrapper[4979]: I0130 23:22:03.416294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerDied","Data":"856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a"} Jan 30 23:22:03 crc kubenswrapper[4979]: I0130 23:22:03.416687 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a" Jan 30 23:22:03 crc kubenswrapper[4979]: I0130 23:22:03.416747 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lhhst" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.541084 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5998d4684d-smdfx"] Jan 30 23:22:04 crc kubenswrapper[4979]: E0130 23:22:04.541956 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerName="heat-db-sync" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.541971 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerName="heat-db-sync" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.542185 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerName="heat-db-sync" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.542985 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.549251 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.549763 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4r7vf" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.556337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.578115 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5998d4684d-smdfx"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.633911 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7bf56f7748-njbm7"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.635501 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.637814 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651503 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqqm\" (UniqueName: \"kubernetes.io/projected/b2612383-27a6-4663-b45a-0aac825bf021-kube-api-access-8hqqm\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651605 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data-custom\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651630 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651723 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-combined-ca-bundle\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.656064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7bf56f7748-njbm7"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.693921 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-d7d58dff5-tjkx9"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.695218 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.701538 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.724197 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d7d58dff5-tjkx9"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data-custom\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753576 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpq5q\" (UniqueName: \"kubernetes.io/projected/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-kube-api-access-gpq5q\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753639 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-combined-ca-bundle\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753684 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqqm\" (UniqueName: \"kubernetes.io/projected/b2612383-27a6-4663-b45a-0aac825bf021-kube-api-access-8hqqm\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753710 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-combined-ca-bundle\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753739 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-combined-ca-bundle\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753765 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753784 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: 
\"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfv82\" (UniqueName: \"kubernetes.io/projected/d72b8dbc-f35e-4aea-ab91-75be38745fd1-kube-api-access-dfv82\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753862 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data-custom\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753887 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753921 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data-custom\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.762891 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-combined-ca-bundle\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.764456 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.766013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data-custom\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.776780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqqm\" (UniqueName: \"kubernetes.io/projected/b2612383-27a6-4663-b45a-0aac825bf021-kube-api-access-8hqqm\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855654 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfv82\" (UniqueName: \"kubernetes.io/projected/d72b8dbc-f35e-4aea-ab91-75be38745fd1-kube-api-access-dfv82\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: 
\"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855750 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data-custom\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855798 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data-custom\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpq5q\" (UniqueName: \"kubernetes.io/projected/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-kube-api-access-gpq5q\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855938 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-combined-ca-bundle\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855967 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-combined-ca-bundle\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.857069 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.857103 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.861506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data-custom\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.862248 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data-custom\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " 
pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.863416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.870012 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-combined-ca-bundle\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.883178 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.885819 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-combined-ca-bundle\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.886693 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfv82\" (UniqueName: \"kubernetes.io/projected/d72b8dbc-f35e-4aea-ab91-75be38745fd1-kube-api-access-dfv82\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.889080 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpq5q\" (UniqueName: \"kubernetes.io/projected/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-kube-api-access-gpq5q\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.890749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.952727 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.020945 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.372625 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5998d4684d-smdfx"] Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.433282 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5998d4684d-smdfx" event={"ID":"b2612383-27a6-4663-b45a-0aac825bf021","Type":"ContainerStarted","Data":"fe737396b69b4aed83691d8dc360b772469f6a98ced0405fc07ff76df54edf3b"} Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.501895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7bf56f7748-njbm7"] Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.603545 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d7d58dff5-tjkx9"] Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.446220 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5998d4684d-smdfx" event={"ID":"b2612383-27a6-4663-b45a-0aac825bf021","Type":"ContainerStarted","Data":"37a22131ae6f3bb74679c57ec9465a1a9b90bc31493ee4bd6dd8bdfd06af1633"} Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.446756 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.451017 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7d58dff5-tjkx9" event={"ID":"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9","Type":"ContainerStarted","Data":"29f08cf5d30ce96408e1e27a14ec3e1f056ea248f2705b3c4ee30fc38ccdc476"} Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.452664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" event={"ID":"d72b8dbc-f35e-4aea-ab91-75be38745fd1","Type":"ContainerStarted","Data":"4fa148c89cef593af9ed94070999f72545c6fc344e3f54badefcc61b96b1c9e4"} Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.473548 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5998d4684d-smdfx" podStartSLOduration=2.473530759 podStartE2EDuration="2.473530759s" podCreationTimestamp="2026-01-30 23:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:22:06.467684731 +0000 UTC m=+6122.428931764" watchObservedRunningTime="2026-01-30 23:22:06.473530759 +0000 UTC m=+6122.434777792" Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.903485 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.477768 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" event={"ID":"d72b8dbc-f35e-4aea-ab91-75be38745fd1","Type":"ContainerStarted","Data":"9514a5344c062ab05c3963f7f5cd2b99667597051b62ff1e6ab7871bac553473"} Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.478312 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.480702 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7d58dff5-tjkx9" event={"ID":"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9","Type":"ContainerStarted","Data":"571a61434f246168d8a2aa5e4f9ede97bfdb87a29dfa451f8a0e1145fbcfecaf"} Jan 30 23:22:08 
Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.493868 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" podStartSLOduration=2.455823351 podStartE2EDuration="4.493852509s" podCreationTimestamp="2026-01-30 23:22:04 +0000 UTC" firstStartedPulling="2026-01-30 23:22:05.511964661 +0000 UTC m=+6121.473211694" lastFinishedPulling="2026-01-30 23:22:07.549993809 +0000 UTC m=+6123.511240852" observedRunningTime="2026-01-30 23:22:08.492185244 +0000 UTC m=+6124.453432277" watchObservedRunningTime="2026-01-30 23:22:08.493852509 +0000 UTC m=+6124.455099542"
Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.517046 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-d7d58dff5-tjkx9" podStartSLOduration=2.57397388 podStartE2EDuration="4.517002766s" podCreationTimestamp="2026-01-30 23:22:04 +0000 UTC" firstStartedPulling="2026-01-30 23:22:05.612697878 +0000 UTC m=+6121.573944911" lastFinishedPulling="2026-01-30 23:22:07.555726764 +0000 UTC m=+6123.516973797" observedRunningTime="2026-01-30 23:22:08.515165546 +0000 UTC m=+6124.476412599" watchObservedRunningTime="2026-01-30 23:22:08.517002766 +0000 UTC m=+6124.478249819"
Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.800224 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.888268 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66977458c7-msp58"]
Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.888505 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log" containerID="cri-o://39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" gracePeriod=30
Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.888576 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" containerID="cri-o://f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" gracePeriod=30
Jan 30 23:22:10 crc kubenswrapper[4979]: I0130 23:22:10.050741 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ds8kf"]
Jan 30 23:22:10 crc kubenswrapper[4979]: I0130 23:22:10.059389 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ds8kf"]
Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.044254 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"]
Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.051188 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"]
Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.083565 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" path="/var/lib/kubelet/pods/59dad3f6-f4ce-4ce7-8364-044694d448f1/volumes"
Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.086101 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" path="/var/lib/kubelet/pods/90346f0c-7cc3-4f3c-a29f-9b7265eff703/volumes"
path="/var/lib/kubelet/pods/90346f0c-7cc3-4f3c-a29f-9b7265eff703/volumes" Jan 30 23:22:12 crc kubenswrapper[4979]: I0130 23:22:12.054393 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:57726->10.217.1.116:8080: read: connection reset by peer" Jan 30 23:22:12 crc kubenswrapper[4979]: I0130 23:22:12.514836 4979 generic.go:334] "Generic (PLEG): container finished" podID="9525bd9a-233e-4207-ac68-26491c2debf7" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" exitCode=0 Jan 30 23:22:12 crc kubenswrapper[4979]: I0130 23:22:12.514883 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerDied","Data":"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd"} Jan 30 23:22:16 crc kubenswrapper[4979]: I0130 23:22:16.383454 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:16 crc kubenswrapper[4979]: I0130 23:22:16.711004 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:18 crc kubenswrapper[4979]: I0130 23:22:18.952391 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:22:19 crc kubenswrapper[4979]: I0130 23:22:19.045285 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:22:19 crc kubenswrapper[4979]: I0130 23:22:19.059231 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:22:19 crc kubenswrapper[4979]: I0130 23:22:19.081994 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" path="/var/lib/kubelet/pods/531879a6-b909-4e84-bb7d-9d4e94c5e7f4/volumes" Jan 30 23:22:24 crc kubenswrapper[4979]: I0130 23:22:24.925910 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:28 crc kubenswrapper[4979]: I0130 23:22:28.952126 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:22:28 crc kubenswrapper[4979]: I0130 23:22:28.952869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:22:28 crc kubenswrapper[4979]: I0130 23:22:28.964523 4979 scope.go:117] "RemoveContainer" containerID="11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.024223 4979 scope.go:117] "RemoveContainer" containerID="f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.055891 4979 scope.go:117] 
"RemoveContainer" containerID="75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.106095 4979 scope.go:117] "RemoveContainer" containerID="22f97911fc2dfbe2d7800553503f0c8338bac7f33443e8a617f5b406e5bdc412" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.143603 4979 scope.go:117] "RemoveContainer" containerID="1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.199197 4979 scope.go:117] "RemoveContainer" containerID="6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7" Jan 30 23:22:32 crc kubenswrapper[4979]: I0130 23:22:32.040273 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:22:32 crc kubenswrapper[4979]: I0130 23:22:32.040764 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.944925 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"] Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.948813 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.951182 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.956593 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"] Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.021294 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.021403 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.021466 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: 
\"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.123404 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.123500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.123553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.124020 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.124233 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.142807 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.270374 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.774187 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"] Jan 30 23:22:35 crc kubenswrapper[4979]: I0130 23:22:35.765951 4979 generic.go:334] "Generic (PLEG): container finished" podID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerID="c67e827feac05e26c7e512c2abe1c7b831a97b5c4a93c7b4bffabaaf66afa66f" exitCode=0 Jan 30 23:22:35 crc kubenswrapper[4979]: I0130 23:22:35.766396 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"c67e827feac05e26c7e512c2abe1c7b831a97b5c4a93c7b4bffabaaf66afa66f"} Jan 30 23:22:35 crc kubenswrapper[4979]: I0130 23:22:35.766437 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerStarted","Data":"85d37a1e47cc6b63a841c62202148a7a9534288c203c8f8849daaae19dd83a95"} Jan 30 23:22:37 crc kubenswrapper[4979]: I0130 23:22:37.802720 4979 generic.go:334] "Generic (PLEG): container finished" podID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerID="384d532c09aa7b786695a05ab4667146ac46516af403e628f281d37a11af7e0a" exitCode=0 Jan 30 23:22:37 crc kubenswrapper[4979]: I0130 23:22:37.802780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"384d532c09aa7b786695a05ab4667146ac46516af403e628f281d37a11af7e0a"} Jan 30 23:22:38 crc kubenswrapper[4979]: I0130 23:22:38.815965 4979 generic.go:334] "Generic (PLEG): container finished" podID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerID="c150996f315cfa3b1078f2bd0f329308dc6a001e61c90189f13d6f91d139ae44" exitCode=0 Jan 30 23:22:38 crc kubenswrapper[4979]: I0130 23:22:38.816124 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"c150996f315cfa3b1078f2bd0f329308dc6a001e61c90189f13d6f91d139ae44"} Jan 30 23:22:38 crc kubenswrapper[4979]: I0130 23:22:38.953832 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.370838 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.458728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459089 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459137 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459252 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459335 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459242 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs" (OuterVolumeSpecName: "logs") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459936 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.464328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.466167 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6" (OuterVolumeSpecName: "kube-api-access-g48z6") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "kube-api-access-g48z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.486624 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data" (OuterVolumeSpecName: "config-data") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.486820 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts" (OuterVolumeSpecName: "scripts") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562018 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562073 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562084 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562095 4979 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826101 4979 generic.go:334] "Generic (PLEG): container finished" podID="9525bd9a-233e-4207-ac68-26491c2debf7" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" exitCode=137 Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826145 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerDied","Data":"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83"} Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826177 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826197 4979 scope.go:117] "RemoveContainer" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826186 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerDied","Data":"a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4"} Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.869426 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.878870 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.016476 4979 scope.go:117] "RemoveContainer" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.034998 4979 scope.go:117] "RemoveContainer" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" Jan 30 23:22:40 crc kubenswrapper[4979]: E0130 23:22:40.035529 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd\": container with ID starting with f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd not found: ID does not exist" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.035573 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd"} err="failed to get container status \"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd\": rpc error: code = NotFound desc = could not find container \"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd\": container with ID starting with f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd not found: ID does not exist" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.035603 4979 scope.go:117] "RemoveContainer" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" Jan 30 23:22:40 crc kubenswrapper[4979]: E0130 23:22:40.035839 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83\": container with ID starting with 39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83 not found: ID does not exist" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.035894 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83"} err="failed to get container status \"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83\": rpc error: code = NotFound desc = could not find container \"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83\": container with ID starting with 39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83 not found: ID does not exist" Jan 30 23:22:40 crc 
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.279591 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") "
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.280106 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") "
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.280292 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") "
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.283459 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle" (OuterVolumeSpecName: "bundle") pod "a4719f7f-2493-47b2-bd3d-3d2edecf2e00" (UID: "a4719f7f-2493-47b2-bd3d-3d2edecf2e00"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.284447 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8" (OuterVolumeSpecName: "kube-api-access-xbnx8") pod "a4719f7f-2493-47b2-bd3d-3d2edecf2e00" (UID: "a4719f7f-2493-47b2-bd3d-3d2edecf2e00"). InnerVolumeSpecName "kube-api-access-xbnx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.383591 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") on node \"crc\" DevicePath \"\""
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.383643 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.423373 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util" (OuterVolumeSpecName: "util") pod "a4719f7f-2493-47b2-bd3d-3d2edecf2e00" (UID: "a4719f7f-2493-47b2-bd3d-3d2edecf2e00"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.485719 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") on node \"crc\" DevicePath \"\""
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.853152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"85d37a1e47cc6b63a841c62202148a7a9534288c203c8f8849daaae19dd83a95"}
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.853233 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d37a1e47cc6b63a841c62202148a7a9534288c203c8f8849daaae19dd83a95"
Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.854070 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"
Jan 30 23:22:41 crc kubenswrapper[4979]: I0130 23:22:41.084158 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" path="/var/lib/kubelet/pods/9525bd9a-233e-4207-ac68-26491c2debf7/volumes"
Jan 30 23:22:48 crc kubenswrapper[4979]: I0130 23:22:48.092433 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6fj56"]
Jan 30 23:22:48 crc kubenswrapper[4979]: I0130 23:22:48.100634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6fj56"]
Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.048281 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"]
Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.066125 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"]
Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.081413 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" path="/var/lib/kubelet/pods/26b37754-6d06-4d68-bf4b-34b553d5750e/volumes"
Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.082184 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" path="/var/lib/kubelet/pods/d7ef2a65-30bc-4af2-aa45-16b8b793359c/volumes"
Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.530435 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"]
Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531358 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log"
Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531374 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log"
Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531394 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon"
Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531400 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon"
Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531424 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="util"
23:22:52.531424 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="util" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531430 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="util" Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531441 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="extract" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531446 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="extract" Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531457 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="pull" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531463 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="pull" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531646 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531663 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531684 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="extract" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.532393 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.533960 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wttfk" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.534197 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.536838 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.557780 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.655435 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.656759 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqsx\" (UniqueName: \"kubernetes.io/projected/be7dff91-b79d-4a99-a43b-9cc4a9894cda-kube-api-access-nqqsx\") pod \"obo-prometheus-operator-68bc856cb9-t8db4\" (UID: \"be7dff91-b79d-4a99-a43b-9cc4a9894cda\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.660799 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.663154 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.663388 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qkjpq" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.708479 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.728953 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.730566 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.758200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.758295 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqsx\" (UniqueName: \"kubernetes.io/projected/be7dff91-b79d-4a99-a43b-9cc4a9894cda-kube-api-access-nqqsx\") pod \"obo-prometheus-operator-68bc856cb9-t8db4\" (UID: \"be7dff91-b79d-4a99-a43b-9cc4a9894cda\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.758324 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.793345 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqsx\" (UniqueName: \"kubernetes.io/projected/be7dff91-b79d-4a99-a43b-9cc4a9894cda-kube-api-access-nqqsx\") pod \"obo-prometheus-operator-68bc856cb9-t8db4\" (UID: \"be7dff91-b79d-4a99-a43b-9cc4a9894cda\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.807387 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.856004 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861112 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861216 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861325 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.869533 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.869705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.894929 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5c445"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.896507 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.903094 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-nd58h" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.903290 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.921891 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5c445"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.963304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.963625 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.969002 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.971604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.037766 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.065617 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7lj\" (UniqueName: \"kubernetes.io/projected/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-kube-api-access-sm7lj\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.065914 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.088337 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.100263 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-99mbt"] Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.101745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.121160 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-gq6rn" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.146174 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-99mbt"] Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.167942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6sk\" (UniqueName: \"kubernetes.io/projected/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-kube-api-access-lt6sk\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.168110 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7lj\" (UniqueName: \"kubernetes.io/projected/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-kube-api-access-sm7lj\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.168128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.168151 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.172439 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.210857 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7lj\" (UniqueName: \"kubernetes.io/projected/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-kube-api-access-sm7lj\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.273183 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6sk\" (UniqueName: \"kubernetes.io/projected/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-kube-api-access-lt6sk\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.273297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.274327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.321785 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6sk\" (UniqueName: \"kubernetes.io/projected/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-kube-api-access-lt6sk\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.335530 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.462508 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.915864 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d"] Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.105264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" event={"ID":"800342ba-21de-4a0e-849e-695bd71885b9","Type":"ContainerStarted","Data":"184cb0d3ac473e01471faeff93078f8fe818c9c83051279b4dfdeb8b9d1a5b27"} Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.166543 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w"] Jan 30 23:22:54 crc kubenswrapper[4979]: W0130 23:22:54.180019 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c76d26_1e50_4da5_8774_dde557bb1c50.slice/crio-73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a WatchSource:0}: Error finding container 73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a: Status 404 returned error can't find the container with id 73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.231119 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"] Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.246517 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5c445"] Jan 30 23:22:54 crc kubenswrapper[4979]: W0130 23:22:54.249680 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc019a415_f4ef_48f7_a0ce_0ee2e2fc95f9.slice/crio-0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6 WatchSource:0}: Error finding container 0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6: Status 404 returned error can't find the container with id 0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6 Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.297625 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-99mbt"] Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.063291 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.088648 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.157294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" event={"ID":"be7dff91-b79d-4a99-a43b-9cc4a9894cda","Type":"ContainerStarted","Data":"2cce48b119f73338cd78cced7bb374b4cfc8f001b745ea226d8cd44cc19b39f7"} Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.162256 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" event={"ID":"a0c76d26-1e50-4da5-8774-dde557bb1c50","Type":"ContainerStarted","Data":"73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a"} Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.164244 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5c445" event={"ID":"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9","Type":"ContainerStarted","Data":"0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6"} Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.165732 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" event={"ID":"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca","Type":"ContainerStarted","Data":"6d2f417dd431364dd49c5672c9d5eeb0b82a54d5beac17187fab398018064105"} Jan 30 23:22:57 crc kubenswrapper[4979]: I0130 23:22:57.092694 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" path="/var/lib/kubelet/pods/b5b42fc3-64fe-40f2-9de5-b6f80489c601/volumes" Jan 30 23:23:02 crc kubenswrapper[4979]: I0130 23:23:02.039297 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:23:02 crc kubenswrapper[4979]: I0130 23:23:02.052555 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.385706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" event={"ID":"800342ba-21de-4a0e-849e-695bd71885b9","Type":"ContainerStarted","Data":"e27d1464fb0f0f3f8a02a3293215a1649986ae0e91f5d79cb11142c8e2f2cb19"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.388278 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" event={"ID":"be7dff91-b79d-4a99-a43b-9cc4a9894cda","Type":"ContainerStarted","Data":"9e4e0d74d67c6933b367d4962cb1155e62a68d91f9f929084b356eb71d38260b"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.389958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" event={"ID":"a0c76d26-1e50-4da5-8774-dde557bb1c50","Type":"ContainerStarted","Data":"c2accf9c6c3e4475e2b628c81f912c05b9e1fc444888309f0d2cdd86132df167"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.391668 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5c445" event={"ID":"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9","Type":"ContainerStarted","Data":"ab6c9339b72856d3c3fe06f9d836c29dd69fded65feec391e2c7829ba17f4943"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.391869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.393379 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" event={"ID":"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca","Type":"ContainerStarted","Data":"308e594958b0e32af33eb7c068568448825b90625a0d9a2736b82eb4c2f84662"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 
23:23:08.393861 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.398145 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.417168 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" podStartSLOduration=3.373339584 podStartE2EDuration="16.417149864s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:53.921024781 +0000 UTC m=+6169.882271844" lastFinishedPulling="2026-01-30 23:23:06.964835091 +0000 UTC m=+6182.926082124" observedRunningTime="2026-01-30 23:23:08.411812689 +0000 UTC m=+6184.373059722" watchObservedRunningTime="2026-01-30 23:23:08.417149864 +0000 UTC m=+6184.378396897" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.471920 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-5c445" podStartSLOduration=3.763435473 podStartE2EDuration="16.471897156s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.257217561 +0000 UTC m=+6170.218464594" lastFinishedPulling="2026-01-30 23:23:06.965679244 +0000 UTC m=+6182.926926277" observedRunningTime="2026-01-30 23:23:08.446676913 +0000 UTC m=+6184.407923956" watchObservedRunningTime="2026-01-30 23:23:08.471897156 +0000 UTC m=+6184.433144179" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.473935 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" podStartSLOduration=3.737514503 podStartE2EDuration="16.473925291s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.182575551 +0000 UTC m=+6170.143822584" lastFinishedPulling="2026-01-30 23:23:06.918986329 +0000 UTC m=+6182.880233372" observedRunningTime="2026-01-30 23:23:08.470979952 +0000 UTC m=+6184.432226985" watchObservedRunningTime="2026-01-30 23:23:08.473925291 +0000 UTC m=+6184.435172324" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.511225 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" podStartSLOduration=3.850919453 podStartE2EDuration="16.51120241s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.25786906 +0000 UTC m=+6170.219116093" lastFinishedPulling="2026-01-30 23:23:06.918152017 +0000 UTC m=+6182.879399050" observedRunningTime="2026-01-30 23:23:08.504023265 +0000 UTC m=+6184.465270298" watchObservedRunningTime="2026-01-30 23:23:08.51120241 +0000 UTC m=+6184.472449443" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.538797 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" podStartSLOduration=2.536009247 podStartE2EDuration="15.538779176s" podCreationTimestamp="2026-01-30 23:22:53 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.281506159 +0000 UTC m=+6170.242753192" lastFinishedPulling="2026-01-30 23:23:07.284276088 +0000 UTC m=+6183.245523121" observedRunningTime="2026-01-30 23:23:08.53260105 +0000 UTC m=+6184.493848103" 
watchObservedRunningTime="2026-01-30 23:23:08.538779176 +0000 UTC m=+6184.500026209" Jan 30 23:23:13 crc kubenswrapper[4979]: I0130 23:23:13.467419 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:23:15 crc kubenswrapper[4979]: I0130 23:23:15.988902 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:15 crc kubenswrapper[4979]: I0130 23:23:15.989468 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" containerID="cri-o://9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464" gracePeriod=2 Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.005137 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.045442 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:16 crc kubenswrapper[4979]: E0130 23:23:16.045843 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.045856 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.046063 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.046665 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.058521 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" podUID="278b06cd-52af-4fce-b0e8-fd7f870b0564" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.076812 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.112213 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.112268 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzvx\" (UniqueName: \"kubernetes.io/projected/278b06cd-52af-4fce-b0e8-fd7f870b0564-kube-api-access-mpzvx\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.112315 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config-secret\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.214230 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.214566 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzvx\" (UniqueName: \"kubernetes.io/projected/278b06cd-52af-4fce-b0e8-fd7f870b0564-kube-api-access-mpzvx\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.214617 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config-secret\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.215604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.222860 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config-secret\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc 
kubenswrapper[4979]: I0130 23:23:16.248638 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzvx\" (UniqueName: \"kubernetes.io/projected/278b06cd-52af-4fce-b0e8-fd7f870b0564-kube-api-access-mpzvx\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.287532 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.288998 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.292791 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-d97wm" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.312046 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.382665 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.417173 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5f2l\" (UniqueName: \"kubernetes.io/projected/0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7-kube-api-access-j5f2l\") pod \"kube-state-metrics-0\" (UID: \"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7\") " pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.519185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5f2l\" (UniqueName: \"kubernetes.io/projected/0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7-kube-api-access-j5f2l\") pod \"kube-state-metrics-0\" (UID: \"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7\") " pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.552765 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5f2l\" (UniqueName: \"kubernetes.io/projected/0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7-kube-api-access-j5f2l\") pod \"kube-state-metrics-0\" (UID: \"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7\") " pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.638602 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.109751 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.115914 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121470 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121503 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-ftqfx" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121650 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121729 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.122231 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.166449 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240709 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240753 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.241025 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.241155 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km7m\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-kube-api-access-8km7m\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.342998 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343075 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343104 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343181 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343200 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km7m\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-kube-api-access-8km7m\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.352485 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.361275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.361936 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.375579 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.421381 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.422515 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.422531 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km7m\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-kube-api-access-8km7m\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.480592 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.559610 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.599548 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.764802 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.788718 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.816001 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.817968 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818564 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818709 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818847 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.819351 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.819578 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2q5fd"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.828485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980611 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980834 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980892 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980936 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980962 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981096 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9vg\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-kube-api-access-mh9vg\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981133 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981160 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981184 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981213 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0f8756ad-bff0-4f0d-9444-cbba47490d33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.083714 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.083972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.083996 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084058 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0f8756ad-bff0-4f0d-9444-cbba47490d33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084127 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084195 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084236 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9vg\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-kube-api-access-mh9vg\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.085590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.095025 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0f8756ad-bff0-4f0d-9444-cbba47490d33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.091208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.101391 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.102966 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.103641 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.110466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.111741 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.131044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9vg\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-kube-api-access-mh9vg\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.133731 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.133754 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a23f2fd2ff538bdecbc4c91851ecb9066558d51c0e562ef2c378e987d2b8cc0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.300641 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.391540 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.597482 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.604547 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"278b06cd-52af-4fce-b0e8-fd7f870b0564","Type":"ContainerStarted","Data":"d240dff7ace3f094e8522777acd66aac8012edbef9a5c5c5c3d31b11353c6bd5"} Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.604588 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"278b06cd-52af-4fce-b0e8-fd7f870b0564","Type":"ContainerStarted","Data":"5892838ac5cf221a39b5e98f551dff5599da70f484db1eaefba3d9881280020d"} Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.607977 4979 generic.go:334] "Generic (PLEG): container finished" podID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerID="9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464" exitCode=137 Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.608135 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b35f115458eae51c09b989e0ed88002066967bab95f54ae46481d8d55d31f85" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.629369 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7","Type":"ContainerStarted","Data":"aca4677357894a4aac4be38157885c66bf3a7a826c621218a8f7a9cab703a701"} Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.649593 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.667489 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.667465355 podStartE2EDuration="2.667465355s" podCreationTimestamp="2026-01-30 23:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:23:18.637748091 +0000 UTC m=+6194.598995124" watchObservedRunningTime="2026-01-30 23:23:18.667465355 +0000 UTC m=+6194.628712388" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.721778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.721824 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.722008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.757264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn" (OuterVolumeSpecName: "kube-api-access-k49pn") pod "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" (UID: "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e"). InnerVolumeSpecName "kube-api-access-k49pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.757671 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" (UID: "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.815907 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" (UID: "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.824986 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") on node \"crc\" DevicePath \"\"" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.825088 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.825100 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.081642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" path="/var/lib/kubelet/pods/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e/volumes" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.102997 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 23:23:19 crc kubenswrapper[4979]: W0130 23:23:19.112913 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f8756ad_bff0_4f0d_9444_cbba47490d33.slice/crio-bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518 WatchSource:0}: Error finding container bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518: Status 404 returned error can't find the container with id bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518 Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.640085 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7","Type":"ContainerStarted","Data":"6173839959b57672d2890be304db629e4b5e96537800946204cf9c170f14c076"} Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.640169 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.641550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"09db8213f0273319e87bfc501570cb8c81af85e0a51bbd2c262a5c8da7726a77"} Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.642700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518"} Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.642732 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.657747 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.120085417 podStartE2EDuration="3.657725301s" podCreationTimestamp="2026-01-30 23:23:16 +0000 UTC" firstStartedPulling="2026-01-30 23:23:17.952774989 +0000 UTC m=+6193.914022022" lastFinishedPulling="2026-01-30 23:23:18.490414883 +0000 UTC m=+6194.451661906" observedRunningTime="2026-01-30 23:23:19.654587686 +0000 UTC m=+6195.615834739" watchObservedRunningTime="2026-01-30 23:23:19.657725301 +0000 UTC m=+6195.618972334" Jan 30 23:23:25 crc kubenswrapper[4979]: I0130 23:23:25.709419 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"33c4b6ff96bc3f1b90e284163c809c2e1e944235ffd2f36009d4065dac7cfeb1"} Jan 30 23:23:25 crc kubenswrapper[4979]: I0130 23:23:25.711548 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"96aa20f913f179c6cd530189409e8a61fbd3c5178ad76ce5cbe5a45d74822b32"} Jan 30 23:23:26 crc kubenswrapper[4979]: I0130 23:23:26.643907 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.404615 4979 scope.go:117] "RemoveContainer" containerID="1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.433007 4979 scope.go:117] "RemoveContainer" containerID="a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.488685 4979 scope.go:117] "RemoveContainer" containerID="9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.547513 4979 scope.go:117] "RemoveContainer" containerID="e78c967f90d787e6a500755dd51462d00698c1a63f9294556b2308f1758c7a1f" Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.784181 4979 generic.go:334] "Generic (PLEG): container finished" podID="0f8756ad-bff0-4f0d-9444-cbba47490d33" containerID="96aa20f913f179c6cd530189409e8a61fbd3c5178ad76ce5cbe5a45d74822b32" exitCode=0 Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.784505 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerDied","Data":"96aa20f913f179c6cd530189409e8a61fbd3c5178ad76ce5cbe5a45d74822b32"} Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.790613 4979 generic.go:334] "Generic (PLEG): container finished" podID="accadf60-186b-408a-94cb-aae9319d58e9" containerID="33c4b6ff96bc3f1b90e284163c809c2e1e944235ffd2f36009d4065dac7cfeb1" exitCode=0 Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.790676 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerDied","Data":"33c4b6ff96bc3f1b90e284163c809c2e1e944235ffd2f36009d4065dac7cfeb1"} Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.039653 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.040014 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.040090 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.041407 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.041468 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83" gracePeriod=600 Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802263 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83" exitCode=0 Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802306 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83"} Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"} Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802621 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:23:35 crc kubenswrapper[4979]: I0130 23:23:35.851327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"513a5b81f34ba40b20d5fcc882ca382de5ecfa8fbe84c38d5996392e2cfb2bac"} Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.882385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"da65aada1ec72cd982291d640db937683e25de5a43b382b86e1471d87e0e99f4"} Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.883802 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.886326 4979 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.886545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"ddceb57f75765315231851d86a0c073f73d7d3cae609276a6315a5f5bb73a71c"} Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.908802 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.544791042 podStartE2EDuration="21.908781418s" podCreationTimestamp="2026-01-30 23:23:17 +0000 UTC" firstStartedPulling="2026-01-30 23:23:18.649403196 +0000 UTC m=+6194.610650229" lastFinishedPulling="2026-01-30 23:23:35.013393572 +0000 UTC m=+6210.974640605" observedRunningTime="2026-01-30 23:23:38.906435194 +0000 UTC m=+6214.867682227" watchObservedRunningTime="2026-01-30 23:23:38.908781418 +0000 UTC m=+6214.870028471" Jan 30 23:23:41 crc kubenswrapper[4979]: I0130 23:23:41.914679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"37251a169656347800b4f36930cf604f42b2fa45769ca9875e7c6e7b7255aa5d"} Jan 30 23:23:44 crc kubenswrapper[4979]: I0130 23:23:44.948716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"7728e5c4dc7ca32a64f3490f16a2204453ccc0c0062d17886857913717d65413"} Jan 30 23:23:44 crc kubenswrapper[4979]: I0130 23:23:44.976531 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.483412473 podStartE2EDuration="28.976510988s" podCreationTimestamp="2026-01-30 23:23:16 +0000 UTC" firstStartedPulling="2026-01-30 23:23:19.116427979 +0000 UTC m=+6195.077675012" lastFinishedPulling="2026-01-30 23:23:44.609526494 +0000 UTC m=+6220.570773527" observedRunningTime="2026-01-30 23:23:44.96840707 +0000 UTC m=+6220.929654103" watchObservedRunningTime="2026-01-30 23:23:44.976510988 +0000 UTC m=+6220.937758021" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.392957 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.393513 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.395091 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.992988 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.618347 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.621101 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.623722 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.623726 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.642284 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774371 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-scripts\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-run-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774568 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-log-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774617 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz8f\" (UniqueName: \"kubernetes.io/projected/05655350-25f6-4610-9ec7-f492b4691d5d-kube-api-access-hlz8f\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.775080 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.775152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-config-data\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.876860 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-run-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.876924 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-log-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.876955 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlz8f\" (UniqueName: \"kubernetes.io/projected/05655350-25f6-4610-9ec7-f492b4691d5d-kube-api-access-hlz8f\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877191 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-config-data\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877225 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-scripts\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-run-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.878026 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-log-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.883612 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.884920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-config-data\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.885210 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-scripts\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.889235 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.900276 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlz8f\" (UniqueName: \"kubernetes.io/projected/05655350-25f6-4610-9ec7-f492b4691d5d-kube-api-access-hlz8f\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.940324 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 23:23:50 crc kubenswrapper[4979]: W0130 23:23:50.569580 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05655350_25f6_4610_9ec7_f492b4691d5d.slice/crio-86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7 WatchSource:0}: Error finding container 86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7: Status 404 returned error can't find the container with id 86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7 Jan 30 23:23:50 crc kubenswrapper[4979]: I0130 23:23:50.580520 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 23:23:51 crc kubenswrapper[4979]: I0130 23:23:51.029168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7"} Jan 30 23:23:52 crc kubenswrapper[4979]: I0130 23:23:52.055744 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"f3d07368e153a5c55121bdd6fa9cc11231a2103541178316393cdf3221172cf8"} Jan 30 23:23:52 crc kubenswrapper[4979]: I0130 23:23:52.056134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"f3670ae1c4492b55077f3979bab9afa417fb637d5d218f4e539df44981ac2af9"} Jan 30 23:23:53 crc kubenswrapper[4979]: I0130 23:23:53.085919 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"adc05dfc2ac8ea72d88dd757977ad961c2c97ffd1a811db9c6cb5c5ec18cd573"} Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.068462 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.092627 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.094282 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.111097 4979 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.132412 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.152861 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.162171 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.196619 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.208363 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.213802 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.051235 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.063509 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.132373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"2654e5763661d4a18b2c68a556c110af31a979faaa8d34ded4ff755047de163d"} Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.134130 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.083893 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" path="/var/lib/kubelet/pods/0010c53f-b0a4-44bd-9178-bbd2941973ff/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.084860 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" path="/var/lib/kubelet/pods/09fb7fe9-97f7-4af9-897c-e4fb6f234c79/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.085458 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" path="/var/lib/kubelet/pods/20a160f3-ed61-481d-be84-cdc6c7b6097a/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.086104 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" path="/var/lib/kubelet/pods/28312ce4-d376-4d84-9aea-175ee095e2ce/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.087159 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" path="/var/lib/kubelet/pods/8f726869-e2f9-4a3b-b40a-236ad3a8566c/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.088372 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" path="/var/lib/kubelet/pods/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd/volumes" Jan 30 23:24:04 crc kubenswrapper[4979]: I0130 23:24:04.046022 4979 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=10.45334335 podStartE2EDuration="15.045997341s" podCreationTimestamp="2026-01-30 23:23:49 +0000 UTC" firstStartedPulling="2026-01-30 23:23:50.572087718 +0000 UTC m=+6226.533334751" lastFinishedPulling="2026-01-30 23:23:55.164741709 +0000 UTC m=+6231.125988742" observedRunningTime="2026-01-30 23:23:56.160841903 +0000 UTC m=+6232.122088946" watchObservedRunningTime="2026-01-30 23:24:04.045997341 +0000 UTC m=+6240.007244384" Jan 30 23:24:04 crc kubenswrapper[4979]: I0130 23:24:04.049389 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:24:04 crc kubenswrapper[4979]: I0130 23:24:04.064198 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:24:05 crc kubenswrapper[4979]: I0130 23:24:05.097009 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" path="/var/lib/kubelet/pods/498ed84d-af03-4ccb-bc46-3d1f8ca8861a/volumes" Jan 30 23:24:19 crc kubenswrapper[4979]: I0130 23:24:19.953084 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 23:24:23 crc kubenswrapper[4979]: I0130 23:24:23.040255 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:24:23 crc kubenswrapper[4979]: I0130 23:24:23.052267 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:24:23 crc kubenswrapper[4979]: I0130 23:24:23.084292 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" path="/var/lib/kubelet/pods/e541a45b-949e-42d3-bbbd-b7fcf76ae045/volumes" Jan 30 23:24:24 crc kubenswrapper[4979]: I0130 23:24:24.035264 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:24:24 crc kubenswrapper[4979]: I0130 23:24:24.046318 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:24:25 crc kubenswrapper[4979]: I0130 23:24:25.092624 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" path="/var/lib/kubelet/pods/39641496-4ab5-48e9-98bf-5627a0a79411/volumes" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.695386 4979 scope.go:117] "RemoveContainer" containerID="c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.761240 4979 scope.go:117] "RemoveContainer" containerID="dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.793345 4979 scope.go:117] "RemoveContainer" containerID="3d49f76579ebce159dde4f7f8e10b1d7dd782ed39ac26b0b2a652ca85113974a" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.841292 4979 scope.go:117] "RemoveContainer" containerID="27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.886060 4979 scope.go:117] "RemoveContainer" containerID="27746524c4c68ca5b766ef144aa2b7cd8bd00780eefec84e45e51a6c155cf253" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.929215 4979 scope.go:117] "RemoveContainer" containerID="f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.986173 4979 
scope.go:117] "RemoveContainer" containerID="d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af" Jan 30 23:24:30 crc kubenswrapper[4979]: I0130 23:24:30.012335 4979 scope.go:117] "RemoveContainer" containerID="658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf" Jan 30 23:24:30 crc kubenswrapper[4979]: I0130 23:24:30.042798 4979 scope.go:117] "RemoveContainer" containerID="95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd" Jan 30 23:24:37 crc kubenswrapper[4979]: I0130 23:24:37.054071 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:24:37 crc kubenswrapper[4979]: I0130 23:24:37.064558 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:24:37 crc kubenswrapper[4979]: I0130 23:24:37.084000 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" path="/var/lib/kubelet/pods/a0c7f950-be1a-4557-8548-d41ac49e8010/volumes" Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.044683 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.054496 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.064527 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.083206 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" path="/var/lib/kubelet/pods/9737fb48-932e-4216-a323-0fa11a0a136d/volumes" Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.083920 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:25:23 crc kubenswrapper[4979]: I0130 23:25:23.081682 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" path="/var/lib/kubelet/pods/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed/volumes" Jan 30 23:25:29 crc kubenswrapper[4979]: I0130 23:25:29.045632 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:25:29 crc kubenswrapper[4979]: I0130 23:25:29.054355 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:25:29 crc kubenswrapper[4979]: I0130 23:25:29.081925 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" path="/var/lib/kubelet/pods/f36c73f1-9737-467c-a014-5ac45eb3f512/volumes" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.234475 4979 scope.go:117] "RemoveContainer" containerID="2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.273071 4979 scope.go:117] "RemoveContainer" containerID="d7d84d9b6f642570ec9f0833c3f37b449071bcc3ab74fb1efbfc67cb25be27a7" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.309735 4979 scope.go:117] "RemoveContainer" containerID="b2aed671841955c62444becfeabff7ccb5bcd0fdccfa5d1f4e24c893f848c58c" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.354342 4979 scope.go:117] "RemoveContainer" containerID="41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 
23:25:30.415319 4979 scope.go:117] "RemoveContainer" containerID="33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd" Jan 30 23:25:32 crc kubenswrapper[4979]: I0130 23:25:32.039433 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:25:32 crc kubenswrapper[4979]: I0130 23:25:32.039701 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:26:02 crc kubenswrapper[4979]: I0130 23:26:02.039938 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:26:02 crc kubenswrapper[4979]: I0130 23:26:02.040461 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.039898 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.042002 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.042274 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.043547 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.043754 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" gracePeriod=600 Jan 30 23:26:32 crc kubenswrapper[4979]: E0130 23:26:32.168258 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.914509 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" exitCode=0 Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.914625 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"} Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.915090 4979 scope.go:117] "RemoveContainer" containerID="ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.916107 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:26:32 crc kubenswrapper[4979]: E0130 23:26:32.916836 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:26:47 crc kubenswrapper[4979]: I0130 23:26:47.081443 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:26:47 crc kubenswrapper[4979]: E0130 23:26:47.083587 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:01 crc kubenswrapper[4979]: I0130 23:27:01.069766 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:01 crc kubenswrapper[4979]: E0130 23:27:01.070492 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:13 crc kubenswrapper[4979]: I0130 23:27:13.070544 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:13 crc kubenswrapper[4979]: E0130 23:27:13.071471 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:27 crc kubenswrapper[4979]: I0130 23:27:27.070754 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:27 crc kubenswrapper[4979]: E0130 23:27:27.071555 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:40 crc kubenswrapper[4979]: I0130 23:27:40.070427 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:40 crc kubenswrapper[4979]: E0130 23:27:40.071241 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:51 crc kubenswrapper[4979]: I0130 23:27:51.070299 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:51 crc kubenswrapper[4979]: E0130 23:27:51.071503 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:04 crc kubenswrapper[4979]: I0130 23:28:04.070143 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:04 crc kubenswrapper[4979]: E0130 23:28:04.071090 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:07 crc kubenswrapper[4979]: I0130 23:28:07.058606 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:28:07 crc kubenswrapper[4979]: I0130 23:28:07.097655 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.041735 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.056797 4979 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.085434 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" path="/var/lib/kubelet/pods/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd/volumes" Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.085992 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" path="/var/lib/kubelet/pods/cad393e9-51ee-4f44-976c-fb9c28487d67/volumes" Jan 30 23:28:15 crc kubenswrapper[4979]: I0130 23:28:15.049206 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:28:15 crc kubenswrapper[4979]: I0130 23:28:15.063805 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:28:15 crc kubenswrapper[4979]: I0130 23:28:15.084161 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" path="/var/lib/kubelet/pods/5d8f6093-1ce3-4cb4-829a-71a3aaded46f/volumes" Jan 30 23:28:16 crc kubenswrapper[4979]: I0130 23:28:16.038881 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:28:16 crc kubenswrapper[4979]: I0130 23:28:16.052466 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:28:17 crc kubenswrapper[4979]: I0130 23:28:17.069844 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:17 crc kubenswrapper[4979]: E0130 23:28:17.070343 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:17 crc kubenswrapper[4979]: I0130 23:28:17.081611 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" path="/var/lib/kubelet/pods/9549a4c7-2fb8-4f18-a7d3-902949e90d8c/volumes" Jan 30 23:28:28 crc kubenswrapper[4979]: I0130 23:28:28.069575 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:28 crc kubenswrapper[4979]: E0130 23:28:28.070644 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.606388 4979 scope.go:117] "RemoveContainer" containerID="4585a42ea864cc4af87b4f754b0c7b9540e84f1af59fb62e004a04f42ca82ee5" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.634335 4979 scope.go:117] "RemoveContainer" containerID="295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.707166 4979 
scope.go:117] "RemoveContainer" containerID="2901952f949f2b6e5bf0bdfc295d7dcb142b237e525207eca8287fadd9dc45a0" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.768720 4979 scope.go:117] "RemoveContainer" containerID="ab9d6fd9b6c78c1609831430497301a395dbc97dc2a1cc5b8ce36db173127e64" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.403607 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.409745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.414595 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.582521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.583668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.583813 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.686380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.686435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.686546 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.687185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"community-operators-d6jhp\" (UID: 
\"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.687309 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.718988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.748286 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:38 crc kubenswrapper[4979]: I0130 23:28:38.427005 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.355832 4979 generic.go:334] "Generic (PLEG): container finished" podID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" exitCode=0 Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.355938 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707"} Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.356341 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerStarted","Data":"3ec55b0e60daf8ca977cc638e8e02fdff79be6b2603eb72f1beeb983901fb590"} Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.360609 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:28:40 crc kubenswrapper[4979]: I0130 23:28:40.072530 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:40 crc kubenswrapper[4979]: E0130 23:28:40.073804 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:40 crc kubenswrapper[4979]: I0130 23:28:40.371779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerStarted","Data":"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165"} Jan 30 23:28:42 crc kubenswrapper[4979]: I0130 23:28:42.403135 4979 generic.go:334] "Generic (PLEG): container finished" podID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" 
containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" exitCode=0 Jan 30 23:28:42 crc kubenswrapper[4979]: I0130 23:28:42.403243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165"} Jan 30 23:28:43 crc kubenswrapper[4979]: I0130 23:28:43.415689 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerStarted","Data":"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8"} Jan 30 23:28:43 crc kubenswrapper[4979]: I0130 23:28:43.433964 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d6jhp" podStartSLOduration=2.948287778 podStartE2EDuration="6.433940902s" podCreationTimestamp="2026-01-30 23:28:37 +0000 UTC" firstStartedPulling="2026-01-30 23:28:39.360189968 +0000 UTC m=+6515.321437041" lastFinishedPulling="2026-01-30 23:28:42.845843132 +0000 UTC m=+6518.807090165" observedRunningTime="2026-01-30 23:28:43.431524527 +0000 UTC m=+6519.392771560" watchObservedRunningTime="2026-01-30 23:28:43.433940902 +0000 UTC m=+6519.395187945" Jan 30 23:28:47 crc kubenswrapper[4979]: I0130 23:28:47.748738 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:47 crc kubenswrapper[4979]: I0130 23:28:47.749403 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:47 crc kubenswrapper[4979]: I0130 23:28:47.821156 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:48 crc kubenswrapper[4979]: I0130 23:28:48.524260 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:48 crc kubenswrapper[4979]: I0130 23:28:48.586431 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:50 crc kubenswrapper[4979]: I0130 23:28:50.490536 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d6jhp" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" containerID="cri-o://5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" gracePeriod=2 Jan 30 23:28:50 crc kubenswrapper[4979]: I0130 23:28:50.970713 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.014153 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.014306 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.014580 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.017798 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities" (OuterVolumeSpecName: "utilities") pod "c9eb63e6-7657-4db8-90c6-f18b77fb3adc" (UID: "c9eb63e6-7657-4db8-90c6-f18b77fb3adc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.024516 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx" (OuterVolumeSpecName: "kube-api-access-sbkcx") pod "c9eb63e6-7657-4db8-90c6-f18b77fb3adc" (UID: "c9eb63e6-7657-4db8-90c6-f18b77fb3adc"). InnerVolumeSpecName "kube-api-access-sbkcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.084196 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9eb63e6-7657-4db8-90c6-f18b77fb3adc" (UID: "c9eb63e6-7657-4db8-90c6-f18b77fb3adc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.117807 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") on node \"crc\" DevicePath \"\"" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.117852 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.117865 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504462 4979 generic.go:334] "Generic (PLEG): container finished" podID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" exitCode=0 Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504512 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8"} Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"3ec55b0e60daf8ca977cc638e8e02fdff79be6b2603eb72f1beeb983901fb590"} Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504610 4979 scope.go:117] "RemoveContainer" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504611 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.552735 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.565219 4979 scope.go:117] "RemoveContainer" containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.566393 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.601781 4979 scope.go:117] "RemoveContainer" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.637446 4979 scope.go:117] "RemoveContainer" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" Jan 30 23:28:51 crc kubenswrapper[4979]: E0130 23:28:51.637863 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8\": container with ID starting with 5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8 not found: ID does not exist" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638010 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8"} err="failed to get container status \"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8\": rpc error: code = NotFound desc = could not find container \"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8\": container with ID starting with 5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8 not found: ID does not exist" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638208 4979 scope.go:117] "RemoveContainer" containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" Jan 30 23:28:51 crc kubenswrapper[4979]: E0130 23:28:51.638748 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165\": container with ID starting with efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165 not found: ID does not exist" containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638778 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165"} err="failed to get container status \"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165\": rpc error: code = NotFound desc = could not find container \"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165\": container with ID starting with efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165 not found: ID does not exist" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638801 4979 scope.go:117] "RemoveContainer" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" Jan 30 23:28:51 crc kubenswrapper[4979]: E0130 23:28:51.639131 4979 log.go:32] "ContainerStatus from runtime service 
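The NotFound errors just above are benign: the pod's containers were already removed along with their sandbox, so when "RemoveContainer" runs again for the same IDs, the runtime's status lookup fails with rpc code NotFound and the kubelet records "DeleteContainer returned error" and moves on. A sketch of that idempotent-delete pattern, assuming a gRPC-backed runtime client (illustration only, not kubelet source; statusFromRuntime is a hypothetical stand-in for the CRI call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func removeContainer(id string, statusFromRuntime func(string) error) {
        if err := statusFromRuntime(id); status.Code(err) == codes.NotFound {
            // Already gone: record the error and treat the cleanup as settled.
            fmt.Printf("DeleteContainer returned error containerID=%s err=%v\n", id, err)
            return
        }
        // ... otherwise proceed with a real removal ...
    }

    func main() {
        gone := func(string) error {
            return status.Error(codes.NotFound, "could not find container")
        }
        removeContainer("5dd1dc4438fe", gone)
    }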
failed" err="rpc error: code = NotFound desc = could not find container \"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707\": container with ID starting with 857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707 not found: ID does not exist" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.639154 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707"} err="failed to get container status \"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707\": rpc error: code = NotFound desc = could not find container \"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707\": container with ID starting with 857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707 not found: ID does not exist" Jan 30 23:28:53 crc kubenswrapper[4979]: I0130 23:28:53.071093 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:53 crc kubenswrapper[4979]: E0130 23:28:53.072614 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:53 crc kubenswrapper[4979]: I0130 23:28:53.086007 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" path="/var/lib/kubelet/pods/c9eb63e6-7657-4db8-90c6-f18b77fb3adc/volumes" Jan 30 23:29:05 crc kubenswrapper[4979]: I0130 23:29:05.083243 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:05 crc kubenswrapper[4979]: E0130 23:29:05.084965 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:29:14 crc kubenswrapper[4979]: I0130 23:29:14.062656 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:29:14 crc kubenswrapper[4979]: I0130 23:29:14.071952 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:29:15 crc kubenswrapper[4979]: I0130 23:29:15.086297 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" path="/var/lib/kubelet/pods/b39f85e7-5ff3-4843-87ca-0eaa482d5107/volumes" Jan 30 23:29:20 crc kubenswrapper[4979]: I0130 23:29:20.070265 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:20 crc kubenswrapper[4979]: E0130 23:29:20.071071 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:29:30 crc kubenswrapper[4979]: I0130 23:29:30.935633 4979 scope.go:117] "RemoveContainer" containerID="ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91" Jan 30 23:29:30 crc kubenswrapper[4979]: I0130 23:29:30.977968 4979 scope.go:117] "RemoveContainer" containerID="ccc43b745db314daf28ae463940cf548663352e7673aec67c6df25622cd0610d" Jan 30 23:29:35 crc kubenswrapper[4979]: I0130 23:29:35.084528 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:35 crc kubenswrapper[4979]: E0130 23:29:35.085551 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:29:47 crc kubenswrapper[4979]: I0130 23:29:47.070343 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:47 crc kubenswrapper[4979]: E0130 23:29:47.071494 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.151181 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv"] Jan 30 23:30:00 crc kubenswrapper[4979]: E0130 23:30:00.152921 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-utilities" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.152954 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-utilities" Jan 30 23:30:00 crc kubenswrapper[4979]: E0130 23:30:00.153085 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.153136 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" Jan 30 23:30:00 crc kubenswrapper[4979]: E0130 23:30:00.153171 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-content" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.153193 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-content" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.153654 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" Jan 30 23:30:00 crc kubenswrapper[4979]: 
I0130 23:30:00.155421 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.158054 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.158831 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.161266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv"] Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.200649 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.200910 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.201479 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.303922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.303995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.304149 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.304756 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.310811 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.319910 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.483923 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.910241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv"] Jan 30 23:30:00 crc kubenswrapper[4979]: W0130 23:30:00.918009 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff10c30_8e1b_457e_8e33_ab3d23c24bf9.slice/crio-b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc WatchSource:0}: Error finding container b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc: Status 404 returned error can't find the container with id b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc Jan 30 23:30:01 crc kubenswrapper[4979]: I0130 23:30:01.309173 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerStarted","Data":"acd704330a20ec0f4bf6517deac2d2d7e79559f7a136972b3616d00497d97f95"} Jan 30 23:30:01 crc kubenswrapper[4979]: I0130 23:30:01.309495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerStarted","Data":"b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc"} Jan 30 23:30:01 crc kubenswrapper[4979]: I0130 23:30:01.331665 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" podStartSLOduration=1.331621598 podStartE2EDuration="1.331621598s" podCreationTimestamp="2026-01-30 23:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:30:01.322565682 +0000 UTC m=+6597.283812725" watchObservedRunningTime="2026-01-30 23:30:01.331621598 +0000 UTC m=+6597.292868631" Jan 30 23:30:02 crc kubenswrapper[4979]: I0130 23:30:02.070926 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:02 crc kubenswrapper[4979]: E0130 
23:30:02.071533 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:02 crc kubenswrapper[4979]: I0130 23:30:02.334132 4979 generic.go:334] "Generic (PLEG): container finished" podID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerID="acd704330a20ec0f4bf6517deac2d2d7e79559f7a136972b3616d00497d97f95" exitCode=0 Jan 30 23:30:02 crc kubenswrapper[4979]: I0130 23:30:02.334189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerDied","Data":"acd704330a20ec0f4bf6517deac2d2d7e79559f7a136972b3616d00497d97f95"} Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.749474 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.769360 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.769700 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.769848 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.770703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume" (OuterVolumeSpecName: "config-volume") pod "cff10c30-8e1b-457e-8e33-ab3d23c24bf9" (UID: "cff10c30-8e1b-457e-8e33-ab3d23c24bf9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.782381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cff10c30-8e1b-457e-8e33-ab3d23c24bf9" (UID: "cff10c30-8e1b-457e-8e33-ab3d23c24bf9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.782457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b" (OuterVolumeSpecName: "kube-api-access-h598b") pod "cff10c30-8e1b-457e-8e33-ab3d23c24bf9" (UID: "cff10c30-8e1b-457e-8e33-ab3d23c24bf9"). InnerVolumeSpecName "kube-api-access-h598b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.873094 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.873136 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.873150 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.374949 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.374940 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerDied","Data":"b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc"} Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.375090 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc" Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.403714 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.411465 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.086633 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" path="/var/lib/kubelet/pods/104b2fbe-7925-4ef8-afca-adf78844b1e4/volumes" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.143358 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:30:05 crc kubenswrapper[4979]: E0130 23:30:05.143870 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerName="collect-profiles" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.143889 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerName="collect-profiles" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.144112 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerName="collect-profiles" Jan 30 23:30:05 crc 
kubenswrapper[4979]: I0130 23:30:05.145347 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.148608 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hvtn9"/"openshift-service-ca.crt" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.155158 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.187496 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hvtn9"/"default-dockercfg-lj46c" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.187594 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hvtn9"/"kube-root-ca.crt" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.300990 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.301735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.403665 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.403820 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.404161 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.436745 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.484117 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.989078 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:30:06 crc kubenswrapper[4979]: W0130 23:30:06.008635 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9f91df2_3eb9_4624_a492_49e62aa440f5.slice/crio-ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6 WatchSource:0}: Error finding container ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6: Status 404 returned error can't find the container with id ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6 Jan 30 23:30:06 crc kubenswrapper[4979]: I0130 23:30:06.391092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerStarted","Data":"ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6"} Jan 30 23:30:12 crc kubenswrapper[4979]: I0130 23:30:12.450875 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerStarted","Data":"cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70"} Jan 30 23:30:12 crc kubenswrapper[4979]: I0130 23:30:12.451496 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerStarted","Data":"bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd"} Jan 30 23:30:12 crc kubenswrapper[4979]: I0130 23:30:12.480887 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hvtn9/must-gather-w5l49" podStartSLOduration=2.3170153190000002 podStartE2EDuration="7.480859412s" podCreationTimestamp="2026-01-30 23:30:05 +0000 UTC" firstStartedPulling="2026-01-30 23:30:06.0115256 +0000 UTC m=+6601.972772633" lastFinishedPulling="2026-01-30 23:30:11.175369663 +0000 UTC m=+6607.136616726" observedRunningTime="2026-01-30 23:30:12.478622542 +0000 UTC m=+6608.439869625" watchObservedRunningTime="2026-01-30 23:30:12.480859412 +0000 UTC m=+6608.442106475" Jan 30 23:30:15 crc kubenswrapper[4979]: I0130 23:30:15.075899 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:15 crc kubenswrapper[4979]: E0130 23:30:15.076828 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.429984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-2g9nn"] Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.432436 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.463615 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.463807 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.565249 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.565398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.565426 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.592773 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.752557 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:17 crc kubenswrapper[4979]: I0130 23:30:17.497842 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" event={"ID":"b309cab1-c68d-4026-ad93-70dbf791d23e","Type":"ContainerStarted","Data":"e7a30b0801f720a139eb91c5a4236357d9c34ac472b30206b2c2ef7e456ce932"} Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.428381 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.432382 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.528360 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.528742 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.528911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.569084 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"]
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631998 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.632153 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.670392 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.757922 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:29 crc kubenswrapper[4979]: I0130 23:30:29.070249 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"
Jan 30 23:30:29 crc kubenswrapper[4979]: E0130 23:30:29.071516 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:30:29 crc kubenswrapper[4979]: I0130 23:30:29.621123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" event={"ID":"b309cab1-c68d-4026-ad93-70dbf791d23e","Type":"ContainerStarted","Data":"f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925"}
Jan 30 23:30:29 crc kubenswrapper[4979]: I0130 23:30:29.640126 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" podStartSLOduration=1.331434525 podStartE2EDuration="13.640110455s" podCreationTimestamp="2026-01-30 23:30:16 +0000 UTC" firstStartedPulling="2026-01-30 23:30:16.793145355 +0000 UTC m=+6612.754392388" lastFinishedPulling="2026-01-30 23:30:29.101821285 +0000 UTC m=+6625.063068318" observedRunningTime="2026-01-30 23:30:29.639778507 +0000 UTC m=+6625.601025540" watchObservedRunningTime="2026-01-30 23:30:29.640110455 +0000 UTC m=+6625.601357488"
Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.116665 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"]
Jan 30 23:30:30 crc kubenswrapper[4979]: W0130 23:30:30.121543 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9c11ad_8590_45a5_bff9_9694d99cf407.slice/crio-b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3 WatchSource:0}: Error finding container b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3: Status 404 returned error can't find the container with id b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3
Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.633549 4979 generic.go:334] "Generic (PLEG): container finished" podID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerID="65e223d547000178886f3ab33399241df2f6bc885d382d317198181db61e8b64" exitCode=0
Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.633787 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"65e223d547000178886f3ab33399241df2f6bc885d382d317198181db61e8b64"}
Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.634097 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerStarted","Data":"b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3"}
Jan 30 23:30:31 crc kubenswrapper[4979]: I0130 23:30:31.111593 4979 scope.go:117] "RemoveContainer" containerID="f4376d94646a15043c11ecee25a291d34f53ab6e158c8bf8bf94d2318ee02027"
Jan 30 23:30:31 crc kubenswrapper[4979]: I0130 23:30:31.648447 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerStarted","Data":"293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8"}
Jan 30 23:30:33 crc kubenswrapper[4979]: I0130 23:30:33.669303 4979 generic.go:334] "Generic (PLEG): container finished" podID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerID="293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8" exitCode=0
Jan 30 23:30:33 crc kubenswrapper[4979]: I0130 23:30:33.669352 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8"}
Jan 30 23:30:34 crc kubenswrapper[4979]: I0130 23:30:34.680880 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerStarted","Data":"da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb"}
Jan 30 23:30:34 crc kubenswrapper[4979]: I0130 23:30:34.703916 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6gxx" podStartSLOduration=10.174315246 podStartE2EDuration="13.70389555s" podCreationTimestamp="2026-01-30 23:30:21 +0000 UTC" firstStartedPulling="2026-01-30 23:30:30.636219679 +0000 UTC m=+6626.597466712" lastFinishedPulling="2026-01-30 23:30:34.165799993 +0000 UTC m=+6630.127047016" observedRunningTime="2026-01-30 23:30:34.698407452 +0000 UTC m=+6630.659654485" watchObservedRunningTime="2026-01-30 23:30:34.70389555 +0000 UTC m=+6630.665142583"
Jan 30 23:30:41 crc kubenswrapper[4979]: I0130 23:30:41.759252 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:41 crc kubenswrapper[4979]: I0130 23:30:41.759867 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:42 crc kubenswrapper[4979]: I0130 23:30:42.807552 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g6gxx" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" probeResult="failure" output=<
Jan 30 23:30:42 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s
Jan 30 23:30:42 crc kubenswrapper[4979]: >
Jan 30 23:30:43 crc kubenswrapper[4979]: I0130 23:30:43.069484 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"
Jan 30 23:30:43 crc kubenswrapper[4979]: E0130 23:30:43.070294 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:30:50 crc kubenswrapper[4979]: I0130 23:30:50.872836 4979 generic.go:334] "Generic (PLEG): container finished" podID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerID="f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925" exitCode=0
Jan 30 23:30:50 crc kubenswrapper[4979]: I0130 23:30:50.872922 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" event={"ID":"b309cab1-c68d-4026-ad93-70dbf791d23e","Type":"ContainerDied","Data":"f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925"}
Jan 30 23:30:51 crc kubenswrapper[4979]: I0130 23:30:51.822103 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:51 crc kubenswrapper[4979]: I0130 23:30:51.881819 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:51 crc kubenswrapper[4979]: I0130 23:30:51.994937 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.031726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"b309cab1-c68d-4026-ad93-70dbf791d23e\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") "
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.031826 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host" (OuterVolumeSpecName: "host") pod "b309cab1-c68d-4026-ad93-70dbf791d23e" (UID: "b309cab1-c68d-4026-ad93-70dbf791d23e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.031988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"b309cab1-c68d-4026-ad93-70dbf791d23e\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") "
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.032512 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-2g9nn"]
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.032802 4979 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.040100 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-2g9nn"]
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.046256 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs" (OuterVolumeSpecName: "kube-api-access-cschs") pod "b309cab1-c68d-4026-ad93-70dbf791d23e" (UID: "b309cab1-c68d-4026-ad93-70dbf791d23e"). InnerVolumeSpecName "kube-api-access-cschs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.133744 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.148696 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"]
Jan 30 23:30:52 crc kubenswrapper[4979]: E0130 23:30:52.149156 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerName="container-00"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.149174 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerName="container-00"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.149376 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerName="container-00"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.150780 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.177395 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"]
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.235791 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.236112 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.236204 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338021 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338188 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338619 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.361886 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.467707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.911637 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a30b0801f720a139eb91c5a4236357d9c34ac472b30206b2c2ef7e456ce932"
Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.911690 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.081792 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" path="/var/lib/kubelet/pods/b309cab1-c68d-4026-ad93-70dbf791d23e/volumes"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.171080 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"]
Jan 30 23:30:53 crc kubenswrapper[4979]: W0130 23:30:53.192177 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf6ba99_f760_491b_95ed_71ae1b9e34b4.slice/crio-0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1 WatchSource:0}: Error finding container 0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1: Status 404 returned error can't find the container with id 0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.268217 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-c5qv6"]
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.270077 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.359543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.359628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.460899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.460982 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.461076 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.487938 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.606274 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.921595 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323" exitCode=0
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.922118 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"}
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.922164 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerStarted","Data":"0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1"}
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.927769 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" event={"ID":"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab","Type":"ContainerStarted","Data":"b2492877ada34dfefd34b3d39354bafbacf62246eb1096e77de813931b9c9bec"}
Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.927817 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" event={"ID":"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab","Type":"ContainerStarted","Data":"899a2958689ea9327648a2a88c41b24151aabda29b83cbec186214ba78ebecbf"}
Jan 30 23:30:54 crc kubenswrapper[4979]: I0130 23:30:54.008088 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-c5qv6"]
Jan 30 23:30:54 crc kubenswrapper[4979]: I0130 23:30:54.028573 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-c5qv6"]
Jan 30 23:30:54 crc kubenswrapper[4979]: I0130 23:30:54.957081 4979 generic.go:334] "Generic (PLEG): container finished" podID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerID="b2492877ada34dfefd34b3d39354bafbacf62246eb1096e77de813931b9c9bec" exitCode=1
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.055626 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.769063 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") "
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.769244 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") "
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.782220 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host" (OuterVolumeSpecName: "host") pod "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" (UID: "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.810413 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72" (OuterVolumeSpecName: "kube-api-access-qwg72") pod "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" (UID: "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab"). InnerVolumeSpecName "kube-api-access-qwg72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.821364 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"]
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.821599 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6gxx" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" containerID="cri-o://da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb" gracePeriod=2
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.874050 4979 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.874419 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.998939 4979 generic.go:334] "Generic (PLEG): container finished" podID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerID="da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb" exitCode=0
Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.999136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb"}
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.001394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerStarted","Data":"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"}
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.010703 4979 scope.go:117] "RemoveContainer" containerID="b2492877ada34dfefd34b3d39354bafbacf62246eb1096e77de813931b9c9bec"
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.010873 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6"
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.069966 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"
Jan 30 23:30:56 crc kubenswrapper[4979]: E0130 23:30:56.070288 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.376815 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.489789 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"ce9c11ad-8590-45a5-bff9-9694d99cf407\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") "
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.489847 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"ce9c11ad-8590-45a5-bff9-9694d99cf407\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") "
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.489896 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"ce9c11ad-8590-45a5-bff9-9694d99cf407\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") "
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.491233 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities" (OuterVolumeSpecName: "utilities") pod "ce9c11ad-8590-45a5-bff9-9694d99cf407" (UID: "ce9c11ad-8590-45a5-bff9-9694d99cf407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.500347 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5" (OuterVolumeSpecName: "kube-api-access-5qfs5") pod "ce9c11ad-8590-45a5-bff9-9694d99cf407" (UID: "ce9c11ad-8590-45a5-bff9-9694d99cf407"). InnerVolumeSpecName "kube-api-access-5qfs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.542186 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce9c11ad-8590-45a5-bff9-9694d99cf407" (UID: "ce9c11ad-8590-45a5-bff9-9694d99cf407"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.592300 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.592337 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.592348 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.022562 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx"
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.023775 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3"}
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.023819 4979 scope.go:117] "RemoveContainer" containerID="da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb"
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.084687 4979 scope.go:117] "RemoveContainer" containerID="293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8"
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.094462 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" path="/var/lib/kubelet/pods/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab/volumes"
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.095617 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"]
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.103966 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"]
Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.124563 4979 scope.go:117] "RemoveContainer" containerID="65e223d547000178886f3ab33399241df2f6bc885d382d317198181db61e8b64"
Jan 30 23:30:59 crc kubenswrapper[4979]: I0130 23:30:59.085934 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" path="/var/lib/kubelet/pods/ce9c11ad-8590-45a5-bff9-9694d99cf407/volumes"
Jan 30 23:31:02 crc kubenswrapper[4979]: I0130 23:31:02.080412 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044" exitCode=0
Jan 30 23:31:02 crc kubenswrapper[4979]: I0130 23:31:02.080457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"}
Jan 30 23:31:03 crc kubenswrapper[4979]: I0130 23:31:03.127709 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerStarted","Data":"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"}
Jan 30 23:31:03 crc kubenswrapper[4979]: I0130 23:31:03.154322 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xdjmd" podStartSLOduration=2.556666116 podStartE2EDuration="11.154303849s" podCreationTimestamp="2026-01-30 23:30:52 +0000 UTC" firstStartedPulling="2026-01-30 23:30:53.925255264 +0000 UTC m=+6649.886502297" lastFinishedPulling="2026-01-30 23:31:02.522892997 +0000 UTC m=+6658.484140030" observedRunningTime="2026-01-30 23:31:03.150451414 +0000 UTC m=+6659.111698447" watchObservedRunningTime="2026-01-30 23:31:03.154303849 +0000 UTC m=+6659.115550882"
Jan 30 23:31:10 crc kubenswrapper[4979]: I0130 23:31:10.072236 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"
Jan 30 23:31:10 crc kubenswrapper[4979]: E0130 23:31:10.072999 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:31:12 crc kubenswrapper[4979]: I0130 23:31:12.468809 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:31:12 crc kubenswrapper[4979]: I0130 23:31:12.469190 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:31:12 crc kubenswrapper[4979]: I0130 23:31:12.519677 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:31:13 crc kubenswrapper[4979]: I0130 23:31:13.306615 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:31:13 crc kubenswrapper[4979]: I0130 23:31:13.366043 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"]
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.250211 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xdjmd" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" containerID="cri-o://cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" gracePeriod=2
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.778797 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.940257 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") "
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.940496 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") "
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.940557 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") "
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.941551 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities" (OuterVolumeSpecName: "utilities") pod "ecf6ba99-f760-491b-95ed-71ae1b9e34b4" (UID: "ecf6ba99-f760-491b-95ed-71ae1b9e34b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.950587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq" (OuterVolumeSpecName: "kube-api-access-sq2nq") pod "ecf6ba99-f760-491b-95ed-71ae1b9e34b4" (UID: "ecf6ba99-f760-491b-95ed-71ae1b9e34b4"). InnerVolumeSpecName "kube-api-access-sq2nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.043379 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.043420 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") on node \"crc\" DevicePath \"\""
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.072124 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecf6ba99-f760-491b-95ed-71ae1b9e34b4" (UID: "ecf6ba99-f760-491b-95ed-71ae1b9e34b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.145646 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.293873 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" exitCode=0
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.294217 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.294273 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"}
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.297602 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1"}
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.297707 4979 scope.go:117] "RemoveContainer" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.366679 4979 scope.go:117] "RemoveContainer" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.401083 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"]
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.413589 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"]
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.430981 4979 scope.go:117] "RemoveContainer" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.464399 4979 scope.go:117] "RemoveContainer" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"
Jan 30 23:31:16 crc kubenswrapper[4979]: E0130 23:31:16.466655 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672\": container with ID starting with cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672 not found: ID does not exist" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.466687 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"} err="failed to get container status \"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672\": rpc error: code = NotFound desc = could not find container \"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672\": container with ID starting with cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672 not found: ID does not exist"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.466712 4979 scope.go:117] "RemoveContainer" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"
Jan 30 23:31:16 crc kubenswrapper[4979]: E0130 23:31:16.467265 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044\": container with ID starting with 1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044 not found: ID does not exist" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.467425 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"} err="failed to get container status \"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044\": rpc error: code = NotFound desc = could not find container \"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044\": container with ID starting with 1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044 not found: ID does not exist"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.467635 4979 scope.go:117] "RemoveContainer" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"
Jan 30 23:31:16 crc kubenswrapper[4979]: E0130 23:31:16.468023 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323\": container with ID starting with fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323 not found: ID does not exist" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"
Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.468153 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"} err="failed to get container status \"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323\": rpc error: code = NotFound desc = could not find container \"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323\": container with ID starting with fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323 not found: ID does not exist"
Jan 30 23:31:17 crc kubenswrapper[4979]: I0130 23:31:17.092407 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" path="/var/lib/kubelet/pods/ecf6ba99-f760-491b-95ed-71ae1b9e34b4/volumes"
Jan 30 23:31:24 crc kubenswrapper[4979]: I0130 23:31:24.069344 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"
Jan 30 23:31:24 crc kubenswrapper[4979]: E0130 23:31:24.070115 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:31:39 crc kubenswrapper[4979]: I0130 23:31:39.070349 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"
Jan 30 23:31:39 crc kubenswrapper[4979]: I0130 23:31:39.517047 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"}
Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.699369 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/init-config-reloader/0.log"
Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.888872 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/alertmanager/0.log"
Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.895903 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/init-config-reloader/0.log"
Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.961827 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/config-reloader/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.084945 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d46697d68-frccf_58f76ba6-bd87-414d-b226-07f7a8705fea/barbican-api/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.131588 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d46697d68-frccf_58f76ba6-bd87-414d-b226-07f7a8705fea/barbican-api-log/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.259341 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ff7d98446-pts46_0e21af86-2d45-409c-b692-97bc60c3d806/barbican-keystone-listener/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.306692 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ff7d98446-pts46_0e21af86-2d45-409c-b692-97bc60c3d806/barbican-keystone-listener-log/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.438267 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c85d579b5-svwjh_fd72817a-eff0-4fac-ba2b-040115385897/barbican-worker/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.454296 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c85d579b5-svwjh_fd72817a-eff0-4fac-ba2b-040115385897/barbican-worker-log/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.623493 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/ceilometer-central-agent/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.633065 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/ceilometer-notification-agent/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.662796 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/proxy-httpd/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.790402 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/sg-core/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.832822 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d24af8b-b86a-4604-82a5-e3d014dba7b5/cinder-api/0.log"
Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.854208 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d24af8b-b86a-4604-82a5-e3d014dba7b5/cinder-api-log/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.047195 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_c3e02f71-2ffc-45bb-9344-28ff1640cffd/probe/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.158847 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_c3e02f71-2ffc-45bb-9344-28ff1640cffd/cinder-backup/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.234887 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_88f999da-53cb-4370-ab43-2a6623aa6d51/probe/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.257000 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_88f999da-53cb-4370-ab43-2a6623aa6d51/cinder-scheduler/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.413989 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_3aa75164-0d7b-4b9a-a21d-2c5834956114/cinder-volume/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.467671 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_3aa75164-0d7b-4b9a-a21d-2c5834956114/probe/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.564212 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-689759d469-jqhxp_d2693393-b0b5-4009-9c45-80d154fa756c/init/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.785948 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-689759d469-jqhxp_d2693393-b0b5-4009-9c45-80d154fa756c/dnsmasq-dns/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.786421 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-689759d469-jqhxp_d2693393-b0b5-4009-9c45-80d154fa756c/init/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.815458 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_67c81730-0360-4ee7-a657-774bab3e5ce1/glance-httpd/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.953000 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_67c81730-0360-4ee7-a657-774bab3e5ce1/glance-log/0.log"
Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.996673 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a/glance-httpd/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.035727 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a/glance-log/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.220472 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-90f9-account-create-update-f758c_3b4b69e9-3082-4eac-a4c9-2fd308ed75bd/mariadb-account-create-update/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.229291 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-d7d58dff5-tjkx9_608b4783-d5c9-467f-9a08-9cd6bc0f0fa9/heat-api/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.422829 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-create-vjhff_e764deeb-609a-4c01-8e75-729988b54849/mariadb-database-create/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.455922 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7bf56f7748-njbm7_d72b8dbc-f35e-4aea-ab91-75be38745fd1/heat-cfnapi/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.652835 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-sync-lhhst_4e6a3c61-50ef-48b5-bcc0-ab3374693979/heat-db-sync/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.740808 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5998d4684d-smdfx_b2612383-27a6-4663-b45a-0aac825bf021/heat-engine/0.log"
Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.929284 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9b688f5c-2xlsg_d199303b-d615-40f9-a420-bfde359d8392/horizon-log/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.001070 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9b688f5c-2xlsg_d199303b-d615-40f9-a420-bfde359d8392/horizon/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.042703 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"]
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.063226 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-vjhff"]
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.072818 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-vjhff"]
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.079404 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"]
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.208151 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b988cf8cf-m4gbb_564a9679-372a-47bb-be3d-70b37a775724/keystone-api/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.223795 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7/kube-state-metrics/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.350992 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_74f9350b-6f51-40b4-85a5-be1ffad9eb0c/adoption/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.585276 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-998b6c5dc-s8h29_633158e6-5d40-43e2-a2c9-94e611b32d3c/neutron-api/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.678241 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-998b6c5dc-s8h29_633158e6-5d40-43e2-a2c9-94e611b32d3c/neutron-httpd/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.893266 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ce01f4b-19ef-4c0b-ab4c-f76e96297fde/nova-api-log/0.log"
Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.912351 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ce01f4b-19ef-4c0b-ab4c-f76e96297fde/nova-api-api/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.045480 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_274c05f8-cb23-41d5-b911-5d13bac207a0/nova-cell0-conductor-conductor/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.080639 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" path="/var/lib/kubelet/pods/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd/volumes"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.081388 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e764deeb-609a-4c01-8e75-729988b54849" path="/var/lib/kubelet/pods/e764deeb-609a-4c01-8e75-729988b54849/volumes"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.220121 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1/nova-cell1-conductor-conductor/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.318762 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f517549b-f450-42f3-9445-6b45713a7328/nova-cell1-novncproxy-novncproxy/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.445647 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a1269d92-1612-453c-8e80-29981ced4aca/nova-metadata-metadata/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.472927 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a1269d92-1612-453c-8e80-29981ced4aca/nova-metadata-log/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.696546 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/init/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.730423 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b6d75777-1cab-4bbc-ab03-361b03c488f4/nova-scheduler-scheduler/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.892169 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/init/0.log"
Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.947705 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/octavia-api-provider-agent/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.105249 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/octavia-api/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.156564 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-pbxbw_e7a38a33-332b-484f-a620-5ecc2b52d9d8/init/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.302993 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-pbxbw_e7a38a33-332b-484f-a620-5ecc2b52d9d8/init/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.346995 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-pbxbw_e7a38a33-332b-484f-a620-5ecc2b52d9d8/octavia-healthmanager/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.396654 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-89w6g_82154ec9-1201-41a2-a0f2-904b2db3c497/init/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.698699 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-89w6g_82154ec9-1201-41a2-a0f2-904b2db3c497/init/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.727401 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-89w6g_82154ec9-1201-41a2-a0f2-904b2db3c497/octavia-housekeeping/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.769634 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-p7ttv_e59aa6da-4048-4cf0-add7-cb98472425cb/init/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.909114 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-p7ttv_e59aa6da-4048-4cf0-add7-cb98472425cb/init/0.log"
Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.942323 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-p7ttv_e59aa6da-4048-4cf0-add7-cb98472425cb/octavia-rsyslog/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.003598 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-m8s2f_81ae9dc0-5b82-4990-878a-9570fc849c26/init/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.181955 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-m8s2f_81ae9dc0-5b82-4990-878a-9570fc849c26/init/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.273689 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dad08bf-c93b-417a-aeef-633e774fffcc/mysql-bootstrap/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.300701 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-m8s2f_81ae9dc0-5b82-4990-878a-9570fc849c26/octavia-worker/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.530052 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dad08bf-c93b-417a-aeef-633e774fffcc/mysql-bootstrap/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.535258 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dad08bf-c93b-417a-aeef-633e774fffcc/galera/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.798437 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_20a89776-fed1-4db4-80e6-11cfdb8f810b/mysql-bootstrap/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.907156 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_20a89776-fed1-4db4-80e6-11cfdb8f810b/mysql-bootstrap/0.log"
Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.965396 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_20a89776-fed1-4db4-80e6-11cfdb8f810b/galera/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.067082 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_278b06cd-52af-4fce-b0e8-fd7f870b0564/openstackclient/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.198397 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kssd2_2524172b-c864-4a7f-8c66-ffd219fa7be6/ovn-controller/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.314070 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-56vn2_927cfb5e-5147-4154-aad7-bd9d4aae47b2/openstack-network-exporter/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.722437 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovsdb-server-init/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.732685 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovsdb-server/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.733456 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovsdb-server-init/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.738878 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovs-vswitchd/0.log"
Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.935695 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_43991b8d-f7aa-479c-9d38-e19114106e81/adoption/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.077099 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89760273-d9f8-4c51-8af9-4a651cadc92c/openstack-network-exporter/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.173702 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89760273-d9f8-4c51-8af9-4a651cadc92c/ovn-northd/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.311129 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6afaef21-c973-4ec1-ae90-f3c9b603f713/openstack-network-exporter/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.374958 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6afaef21-c973-4ec1-ae90-f3c9b603f713/ovsdbserver-nb/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.470209 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_7fbe256e-5861-4bd2-b76d-a53f79b48380/openstack-network-exporter/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.611404 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_7fbe256e-5861-4bd2-b76d-a53f79b48380/ovsdbserver-nb/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.683630 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_977a1b80-05e8-4d3c-acbb-e9ea09b98ab0/openstack-network-exporter/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.743901 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_977a1b80-05e8-4d3c-acbb-e9ea09b98ab0/ovsdbserver-nb/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.896873 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e971ad9f-b09c-4504-8caf-f6c9f0801e00/openstack-network-exporter/0.log"
Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.975055 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e971ad9f-b09c-4504-8caf-f6c9f0801e00/ovsdbserver-sb/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.051317 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_755c668a-a4c9-4a52-901d-338208af4efb/openstack-network-exporter/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.135493 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_755c668a-a4c9-4a52-901d-338208af4efb/ovsdbserver-sb/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.258059 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_b0076344-a5b2-4fef-8a6f-28b6194b850e/openstack-network-exporter/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.302255 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_b0076344-a5b2-4fef-8a6f-28b6194b850e/ovsdbserver-sb/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.468047 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8565876748-g76rq_019fe9ef-3972-45a8-82ec-8b566d9a1c58/placement-api/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.556248 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8565876748-g76rq_019fe9ef-3972-45a8-82ec-8b566d9a1c58/placement-log/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.671731 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/init-config-reloader/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.749536 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4a63b89d-496c-4f6e-8ba3-a18de60230af/memcached/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.830360 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/config-reloader/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.846131 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/prometheus/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.879054 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/init-config-reloader/0.log"
Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.885717 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/thanos-sidecar/0.log"
Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.054498 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_291b372c-0448-4bc4-88a4-e61a412ba45a/setup-container/0.log"
Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.219423 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_291b372c-0448-4bc4-88a4-e61a412ba45a/setup-container/0.log"
Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.241245 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_291b372c-0448-4bc4-88a4-e61a412ba45a/rabbitmq/0.log"
Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.245672 4979 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack_rabbitmq-server-0_c14c3367-d6a7-443a-9c15-913f73eac121/setup-container/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.421881 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c14c3367-d6a7-443a-9c15-913f73eac121/rabbitmq/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.442869 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c14c3367-d6a7-443a-9c15-913f73eac121/setup-container/0.log" Jan 30 23:32:03 crc kubenswrapper[4979]: I0130 23:32:03.046424 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:32:03 crc kubenswrapper[4979]: I0130 23:32:03.057171 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:32:03 crc kubenswrapper[4979]: I0130 23:32:03.083659 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" path="/var/lib/kubelet/pods/4e6a3c61-50ef-48b5-bcc0-ab3374693979/volumes" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.281474 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-r2mb8_dcd08638-857d-40cd-a92c-b6dcef0bc329/manager/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.440420 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-5k7wd_9134e6d2-b638-49be-9612-be12250e0a6d/manager/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.490602 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-787499fbb-p95sz_11771b88-abd2-436e-a95c-5113a5bae88b/manager/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.655944 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/util/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.941868 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/pull/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.949792 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/util/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.976002 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/pull/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.162914 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/pull/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.199603 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/util/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.224230 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/extract/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.408808 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-h59f2_0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc/manager/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.536228 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bfc9d4d48-zqjfh_8893a935-e9c7-4d38-ae0c-17a94445475f/manager/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.623267 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-5pmpx_07393de3-4dbb-4de1-a7fc-49785a623de2/manager/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.870157 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-lrqnv_9c8cf87b-4069-497d-9fcc-3b7be476ed4d/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.132738 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7d96d95959-5s8xm_7f396cc2-4739-4401-9319-36881d4f449d/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.157412 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-9q469_5966d922-4db9-40f7-baf1-5624f1a033d6/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.185482 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-g6pnt_39f45c61-20b7-4d98-98af-526018a240c1/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.461419 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-6bb56_777d41f5-6e7f-4099-9f6f-aceaf0b972da/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.562999 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-v774d_31481495-f181-449a-887e-ed58bf88c783/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.824572 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-lz8dw_1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.904459 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-58s6k_73527aaf-5de3-4a3e-aa4c-f2ac98e5be11/manager/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.027498 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg_c9710f6a-7b47-4f62-bc11-9d5727fdb01f/manager/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.207617 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c7d885c49-dmwtw_9a874b50-c515-45d3-8562-05532a2c5adc/operator/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.451412 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-jl5wf_bb59579b-3a3c-4ae9-b3fe-d4231a17e050/registry-server/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.741835 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-7f98k_cf2e278a-e0cb-4505-bd08-38c02155a632/manager/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.817889 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-6f7vv_82a19f5f-9a94-4b08-8795-22fce21897bf/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.327363 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-566d8d7445-78f4b_c15b97e5-3fe4-4f42-9501-b4c7c083bdbb/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.361915 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-r4rcx_788f4d92-590f-44b1-8b93-a15b9f88b052/operator/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.686475 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-57br8_baa9dff2-93f9-4590-a86d-cd891b4273f2/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.746629 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-69484b8d9d-nc5fg_bf959f71-8af9-4121-888f-13207cc2e1d0/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.787212 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b5794dddd-fgq92_cea237e7-6ca9-4dcd-b5d6-d471898e2c09/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.837274 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-dpkrg_2487dbd3-ca49-4b26-99e3-2c858b549944/manager/0.log" Jan 30 23:32:31 crc kubenswrapper[4979]: I0130 23:32:31.257670 4979 scope.go:117] "RemoveContainer" containerID="c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e" Jan 30 23:32:31 crc kubenswrapper[4979]: I0130 23:32:31.310092 4979 scope.go:117] "RemoveContainer" containerID="58163cfecf1e6d2fed441241595c3b510d0c4b0a9adfab7fece442a3238e97f0" Jan 30 23:32:31 crc kubenswrapper[4979]: I0130 23:32:31.338486 4979 scope.go:117] "RemoveContainer" containerID="4bf379d2ade37e9d1e0a22eab217d802e3a8854982275953c74bf158307b26eb" Jan 30 23:32:43 crc kubenswrapper[4979]: I0130 23:32:43.000208 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rthrv_6ebf43de-28a1-4cb6-a008-7bcc970b96ac/control-plane-machine-set-operator/0.log" Jan 30 23:32:43 crc kubenswrapper[4979]: I0130 23:32:43.213615 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mr5l2_7616472e-472c-4dfa-bf69-97d784e1e42f/kube-rbac-proxy/0.log" Jan 30 23:32:43 crc kubenswrapper[4979]: I0130 23:32:43.269749 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mr5l2_7616472e-472c-4dfa-bf69-97d784e1e42f/machine-api-operator/0.log" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.241837 4979 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243003 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243077 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243099 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243109 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243156 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243181 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243191 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243203 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerName="container-00" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243213 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerName="container-00" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243226 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243235 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243258 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243267 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243602 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243640 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243664 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerName="container-00" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.246232 4979 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.267693 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.392175 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.392328 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.392594 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495283 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495396 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495468 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495946 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.496303 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.516314 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.572120 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:48 crc kubenswrapper[4979]: I0130 23:32:48.103544 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:32:48 crc kubenswrapper[4979]: I0130 23:32:48.234160 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerStarted","Data":"2a447b25143664a67304d059de23e8f6fcf4bd430097e0e4ed2adaf1675e6d7c"} Jan 30 23:32:49 crc kubenswrapper[4979]: I0130 23:32:49.247842 4979 generic.go:334] "Generic (PLEG): container finished" podID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" exitCode=0 Jan 30 23:32:49 crc kubenswrapper[4979]: I0130 23:32:49.248311 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3"} Jan 30 23:32:50 crc kubenswrapper[4979]: I0130 23:32:50.259652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerStarted","Data":"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7"} Jan 30 23:32:51 crc kubenswrapper[4979]: I0130 23:32:51.271584 4979 generic.go:334] "Generic (PLEG): container finished" podID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" exitCode=0 Jan 30 23:32:51 crc kubenswrapper[4979]: I0130 23:32:51.271901 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7"} Jan 30 23:32:52 crc kubenswrapper[4979]: I0130 23:32:52.282368 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerStarted","Data":"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee"} Jan 30 23:32:52 crc kubenswrapper[4979]: I0130 23:32:52.305767 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwqd2" podStartSLOduration=2.920131849 podStartE2EDuration="5.30574957s" podCreationTimestamp="2026-01-30 23:32:47 +0000 UTC" firstStartedPulling="2026-01-30 23:32:49.25071219 +0000 UTC m=+6765.211959243" lastFinishedPulling="2026-01-30 23:32:51.636329931 +0000 UTC m=+6767.597576964" observedRunningTime="2026-01-30 23:32:52.298863014 +0000 UTC m=+6768.260110057" watchObservedRunningTime="2026-01-30 23:32:52.30574957 +0000 UTC m=+6768.266996603" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.051757 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-545d4d4674-f88tb_99fcd41b-c557-4bf0-abbb-b189f4aaaf41/cert-manager-controller/0.log" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.187985 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-x57ft_34da3314-5047-419b-8c7b-927cc2f00d8c/cert-manager-cainjector/0.log" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.263065 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-pw6nw_7670008a-1d21-4255-8148-e85ac90a90d4/cert-manager-webhook/0.log" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.573194 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.573246 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.631674 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:58 crc kubenswrapper[4979]: I0130 23:32:58.379892 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:58 crc kubenswrapper[4979]: I0130 23:32:58.422839 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.348420 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwqd2" podUID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerName="registry-server" containerID="cri-o://19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" gracePeriod=2 Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.912206 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.964279 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.964365 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.964435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.965149 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities" (OuterVolumeSpecName: "utilities") pod "760f6d0d-ff72-4a55-957d-71d0d72a8fe3" (UID: "760f6d0d-ff72-4a55-957d-71d0d72a8fe3"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.972054 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9" (OuterVolumeSpecName: "kube-api-access-d8kc9") pod "760f6d0d-ff72-4a55-957d-71d0d72a8fe3" (UID: "760f6d0d-ff72-4a55-957d-71d0d72a8fe3"). InnerVolumeSpecName "kube-api-access-d8kc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.982951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "760f6d0d-ff72-4a55-957d-71d0d72a8fe3" (UID: "760f6d0d-ff72-4a55-957d-71d0d72a8fe3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.066685 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.067147 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.067165 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") on node \"crc\" DevicePath \"\"" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.363915 4979 generic.go:334] "Generic (PLEG): container finished" podID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" exitCode=0 Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.363995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee"} Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.364018 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.364092 4979 scope.go:117] "RemoveContainer" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.364075 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"2a447b25143664a67304d059de23e8f6fcf4bd430097e0e4ed2adaf1675e6d7c"} Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.394970 4979 scope.go:117] "RemoveContainer" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.395790 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.408413 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.421428 4979 scope.go:117] "RemoveContainer" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.481597 4979 scope.go:117] "RemoveContainer" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" Jan 30 23:33:01 crc kubenswrapper[4979]: E0130 23:33:01.482420 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee\": container with ID starting with 19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee not found: ID does not exist" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482473 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee"} err="failed to get container status \"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee\": rpc error: code = NotFound desc = could not find container \"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee\": container with ID starting with 19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee not found: ID does not exist" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482499 4979 scope.go:117] "RemoveContainer" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" Jan 30 23:33:01 crc kubenswrapper[4979]: E0130 23:33:01.482922 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7\": container with ID starting with 1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7 not found: ID does not exist" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482945 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7"} err="failed to get container status \"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7\": rpc error: code = NotFound desc = could not find 
container \"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7\": container with ID starting with 1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7 not found: ID does not exist" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482959 4979 scope.go:117] "RemoveContainer" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" Jan 30 23:33:01 crc kubenswrapper[4979]: E0130 23:33:01.483463 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3\": container with ID starting with a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3 not found: ID does not exist" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.483499 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3"} err="failed to get container status \"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3\": rpc error: code = NotFound desc = could not find container \"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3\": container with ID starting with a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3 not found: ID does not exist" Jan 30 23:33:03 crc kubenswrapper[4979]: I0130 23:33:03.079926 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" path="/var/lib/kubelet/pods/760f6d0d-ff72-4a55-957d-71d0d72a8fe3/volumes" Jan 30 23:33:10 crc kubenswrapper[4979]: I0130 23:33:10.815699 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-84fjt_4e67f5da-565e-4850-ac22-136965b5e12d/nmstate-console-plugin/0.log" Jan 30 23:33:10 crc kubenswrapper[4979]: I0130 23:33:10.984628 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nqwmx_f03646b0-8776-45cc-9594-a0266af57be5/kube-rbac-proxy/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.047280 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2xs54_2bf07cc3-611c-44b3-9fd0-831f5b718f11/nmstate-handler/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.073088 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nqwmx_f03646b0-8776-45cc-9594-a0266af57be5/nmstate-metrics/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.265623 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-f7cxj_63bf7e31-b607-4b21-9753-eb05a7bfb987/nmstate-webhook/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.301803 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tv5t2_949791a2-d4bd-4ec8-8e34-70a2d0af1af1/nmstate-operator/0.log" Jan 30 23:33:25 crc kubenswrapper[4979]: I0130 23:33:25.745574 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-t8db4_be7dff91-b79d-4a99-a43b-9cc4a9894cda/prometheus-operator/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.187892 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w_a0c76d26-1e50-4da5-8774-dde557bb1c50/prometheus-operator-admission-webhook/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.204183 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d_800342ba-21de-4a0e-849e-695bd71885b9/prometheus-operator-admission-webhook/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.390350 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5c445_c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9/operator/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.427920 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-99mbt_b4d1f5a8-494c-4d68-ac75-0d7516cb7fca/perses-operator/0.log" Jan 30 23:33:39 crc kubenswrapper[4979]: I0130 23:33:39.737142 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6whjn_b9bf7d77-b99e-4190-8510-dd0778767e89/kube-rbac-proxy/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.010799 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.181638 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6whjn_b9bf7d77-b99e-4190-8510-dd0778767e89/controller/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.223683 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.261131 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.263109 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.349040 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.559122 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.563086 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.583655 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.628610 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.757191 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc 
kubenswrapper[4979]: I0130 23:33:40.765494 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.799114 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.823505 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/controller/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.931138 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/frr-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.985065 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/kube-rbac-proxy/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.049167 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/kube-rbac-proxy-frr/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.159679 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/reloader/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.294471 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5bgxv_f8932bcf-8e7b-4302-a623-ece7abe7d2e2/frr-k8s-webhook-server/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.449391 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68b5d74f6-krw7s_30c6b9df-d3aa-4a9a-807e-93d8b11c9159/manager/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.578719 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-545587bcb5-lxtf2_04d21772-3311-4f78-a621-a66fa5d1cb7d/webhook-server/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.758809 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v2nkx_6a083acc-78e0-41df-84ad-70c965c7bb5a/kube-rbac-proxy/0.log" Jan 30 23:33:42 crc kubenswrapper[4979]: I0130 23:33:42.434691 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v2nkx_6a083acc-78e0-41df-84ad-70c965c7bb5a/speaker/0.log" Jan 30 23:33:43 crc kubenswrapper[4979]: I0130 23:33:43.473635 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/frr/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.617442 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/util/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.825992 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/pull/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.826020 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/pull/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.835882 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.001720 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.015819 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.047143 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/extract/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.169352 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.317949 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.346989 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.347502 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.514676 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.570503 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/extract/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.573391 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.708917 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.901887 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.936014 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.942063 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.126113 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/extract/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.156294 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.219166 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.304168 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.470152 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.483550 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.487492 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.642214 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/extract/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.642669 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.695833 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.827834 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-utilities/0.log" Jan 30 
23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.970044 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-utilities/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.977230 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.001697 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.172934 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.257733 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-utilities/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.383056 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-utilities/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.588088 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-utilities/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.631526 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.679656 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.748691 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/registry-server/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.835716 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.850884 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-utilities/0.log" Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.045786 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nzltj_ea935cc6-1adc-4763-bf1c-8c08fec3894f/marketplace-operator/0.log" Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.223613 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-utilities/0.log" Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.418299 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-content/0.log" Jan 30 
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.490957 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-utilities/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.497444 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-content/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.617582 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-utilities/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.721646 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-content/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.850411 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-utilities/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.048767 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/registry-server/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.093383 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-content/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.100970 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/registry-server/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.119260 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-utilities/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.141577 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-content/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.308575 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-utilities/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.322374 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-content/0.log"
Jan 30 23:34:01 crc kubenswrapper[4979]: I0130 23:34:01.169433 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/registry-server/0.log"
Jan 30 23:34:02 crc kubenswrapper[4979]: I0130 23:34:02.039479 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:34:02 crc kubenswrapper[4979]: I0130 23:34:02.039538 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.376265 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-t8db4_be7dff91-b79d-4a99-a43b-9cc4a9894cda/prometheus-operator/0.log"
Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.399339 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w_a0c76d26-1e50-4da5-8774-dde557bb1c50/prometheus-operator-admission-webhook/0.log"
Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.414819 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d_800342ba-21de-4a0e-849e-695bd71885b9/prometheus-operator-admission-webhook/0.log"
Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.551595 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-99mbt_b4d1f5a8-494c-4d68-ac75-0d7516cb7fca/perses-operator/0.log"
Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.559069 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5c445_c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9/operator/0.log"
Jan 30 23:34:32 crc kubenswrapper[4979]: I0130 23:34:32.039518 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:34:32 crc kubenswrapper[4979]: I0130 23:34:32.040236 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.039746 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.040673 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.040732 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.041804 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
containerStatusID={"Type":"cri-o","ID":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.041882 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6" gracePeriod=600 Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.617893 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6" exitCode=0 Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.618067 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"} Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.618527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"} Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.618555 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.007464 4979 generic.go:334] "Generic (PLEG): container finished" podID="a9f91df2-3eb9-4624-a492-49e62aa440f5" containerID="bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd" exitCode=0 Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.007585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerDied","Data":"bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd"} Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.008955 4979 scope.go:117] "RemoveContainer" containerID="bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd" Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.830747 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/gather/0.log" Jan 30 23:35:46 crc kubenswrapper[4979]: I0130 23:35:46.701069 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:35:46 crc kubenswrapper[4979]: I0130 23:35:46.703386 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hvtn9/must-gather-w5l49" podUID="a9f91df2-3eb9-4624-a492-49e62aa440f5" containerName="copy" containerID="cri-o://cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70" gracePeriod=2 Jan 30 23:35:46 crc kubenswrapper[4979]: I0130 23:35:46.716815 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.109070 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/copy/0.log" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.109841 4979 generic.go:334] "Generic (PLEG): container finished" podID="a9f91df2-3eb9-4624-a492-49e62aa440f5" containerID="cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70" exitCode=143 Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.246465 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/copy/0.log" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.247115 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.381485 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"a9f91df2-3eb9-4624-a492-49e62aa440f5\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.381587 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"a9f91df2-3eb9-4624-a492-49e62aa440f5\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.391558 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw" (OuterVolumeSpecName: "kube-api-access-9tmmw") pod "a9f91df2-3eb9-4624-a492-49e62aa440f5" (UID: "a9f91df2-3eb9-4624-a492-49e62aa440f5"). InnerVolumeSpecName "kube-api-access-9tmmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.483493 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") on node \"crc\" DevicePath \"\"" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.553176 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a9f91df2-3eb9-4624-a492-49e62aa440f5" (UID: "a9f91df2-3eb9-4624-a492-49e62aa440f5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.585908 4979 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.123220 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/copy/0.log" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.124124 4979 scope.go:117] "RemoveContainer" containerID="cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.124329 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.159306 4979 scope.go:117] "RemoveContainer" containerID="bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd" Jan 30 23:35:49 crc kubenswrapper[4979]: I0130 23:35:49.080414 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f91df2-3eb9-4624-a492-49e62aa440f5" path="/var/lib/kubelet/pods/a9f91df2-3eb9-4624-a492-49e62aa440f5/volumes" Jan 30 23:36:31 crc kubenswrapper[4979]: I0130 23:36:31.524147 4979 scope.go:117] "RemoveContainer" containerID="f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925" Jan 30 23:37:02 crc kubenswrapper[4979]: I0130 23:37:02.039999 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:37:02 crc kubenswrapper[4979]: I0130 23:37:02.040565 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:37:32 crc kubenswrapper[4979]: I0130 23:37:32.042094 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:37:32 crc kubenswrapper[4979]: I0130 23:37:32.042570 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.040995 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.041686 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.041796 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.043351 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.043487 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" gracePeriod=600 Jan 30 23:38:02 crc kubenswrapper[4979]: E0130 23:38:02.172008 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.546685 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" exitCode=0 Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.546805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"} Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.547170 4979 scope.go:117] "RemoveContainer" containerID="294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.548273 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" Jan 30 23:38:02 crc kubenswrapper[4979]: E0130 23:38:02.548870 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:38:16 crc kubenswrapper[4979]: I0130 23:38:16.070457 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" Jan 30 23:38:16 crc kubenswrapper[4979]: E0130 23:38:16.071633 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:38:28 crc kubenswrapper[4979]: I0130 23:38:28.070101 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" Jan 30 23:38:28 crc kubenswrapper[4979]: E0130 23:38:28.070899 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 23:38:41 crc kubenswrapper[4979]: I0130 23:38:41.070357 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"
Jan 30 23:38:41 crc kubenswrapper[4979]: E0130 23:38:41.071442 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:38:54 crc kubenswrapper[4979]: I0130 23:38:54.070016 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"
Jan 30 23:38:54 crc kubenswrapper[4979]: E0130 23:38:54.071442 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:39:09 crc kubenswrapper[4979]: I0130 23:39:09.069767 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"
Jan 30 23:39:09 crc kubenswrapper[4979]: E0130 23:39:09.070533 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
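The recurring "back-off 5m0s" errors are the kubelet's CrashLoopBackOff restart delay at its cap: the delay doubles on each crash until it saturates at five minutes (and is reset after the container runs cleanly for a while). A minimal sketch of that doubling-with-cap schedule follows; the 10s initial delay and 5m cap match the kubelet defaults, while restartDelay is our own name, not a kubelet API.

// backoff prints the CrashLoopBackOff-style restart delay schedule.
package main

import (
	"fmt"
	"time"
)

// restartDelay returns the wait before restart attempt n (0-based),
// doubling from the initial delay and saturating at maxDelay.
func restartDelay(n int, initial, maxDelay time.Duration) time.Duration {
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// Prints: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	for n := 0; n < 7; n++ {
		fmt.Println(restartDelay(n, 10*time.Second, 5*time.Minute))
	}
}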